Test Report: QEMU_macOS 18170

de283e0d965b1b3530e2c6b6aa77e702081059d3:2024-02-13:33128

Tests failed (85/271)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 39.19
7 TestDownloadOnly/v1.16.0/kubectl 0
31 TestOffline 10.32
39 TestAddons/parallel/Ingress 34.62
54 TestCertOptions 12.24
55 TestCertExpiration 197.62
56 TestDockerFlags 12.6
57 TestForceSystemdFlag 11.7
58 TestForceSystemdEnv 10.16
103 TestFunctional/parallel/ServiceCmdConnect 27.39
170 TestImageBuild/serial/BuildWithBuildArg 1.1
179 TestIngressAddonLegacy/serial/ValidateIngressAddons 56.98
214 TestMountStart/serial/StartWithMountFirst 10.76
217 TestMultiNode/serial/FreshStart2Nodes 9.8
218 TestMultiNode/serial/DeployApp2Nodes 70.31
219 TestMultiNode/serial/PingHostFrom2Pods 0.09
220 TestMultiNode/serial/AddNode 0.07
221 TestMultiNode/serial/MultiNodeLabels 0.06
222 TestMultiNode/serial/ProfileList 0.1
223 TestMultiNode/serial/CopyFile 0.06
224 TestMultiNode/serial/StopNode 0.14
225 TestMultiNode/serial/StartAfterStop 0.11
226 TestMultiNode/serial/RestartKeepsNodes 5.38
227 TestMultiNode/serial/DeleteNode 0.11
228 TestMultiNode/serial/StopMultiNode 0.16
229 TestMultiNode/serial/RestartMultiNode 5.25
230 TestMultiNode/serial/ValidateNameConflict 19.87
234 TestPreload 9.96
236 TestScheduledStopUnix 9.89
237 TestSkaffold 17.79
240 TestRunningBinaryUpgrade 656.76
242 TestKubernetesUpgrade 15.22
255 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.51
258 TestStoppedBinaryUpgrade/Upgrade 616.43
260 TestPause/serial/Start 10.09
270 TestNoKubernetes/serial/StartWithK8s 9.87
271 TestNoKubernetes/serial/StartWithStopK8s 5.87
272 TestNoKubernetes/serial/Start 5.91
276 TestNoKubernetes/serial/StartNoArgs 5.87
278 TestNetworkPlugins/group/auto/Start 9.72
279 TestNetworkPlugins/group/flannel/Start 9.81
280 TestNetworkPlugins/group/kindnet/Start 9.88
281 TestNetworkPlugins/group/enable-default-cni/Start 9.94
282 TestNetworkPlugins/group/bridge/Start 9.68
283 TestNetworkPlugins/group/kubenet/Start 9.69
284 TestNetworkPlugins/group/custom-flannel/Start 9.8
285 TestNetworkPlugins/group/calico/Start 9.7
287 TestNetworkPlugins/group/false/Start 9.8
289 TestStartStop/group/old-k8s-version/serial/FirstStart 9.89
290 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
291 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
294 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
295 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
296 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
297 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
298 TestStartStop/group/old-k8s-version/serial/Pause 0.11
300 TestStartStop/group/no-preload/serial/FirstStart 10.02
302 TestStartStop/group/embed-certs/serial/FirstStart 11.45
303 TestStartStop/group/no-preload/serial/DeployApp 0.1
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
307 TestStartStop/group/no-preload/serial/SecondStart 7.02
308 TestStartStop/group/embed-certs/serial/DeployApp 0.09
309 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
312 TestStartStop/group/embed-certs/serial/SecondStart 5.2
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/no-preload/serial/Pause 0.11
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
319 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
320 TestStartStop/group/embed-certs/serial/Pause 0.11
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.79
324 TestStartStop/group/newest-cni/serial/FirstStart 11.93
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 7.1
334 TestStartStop/group/newest-cni/serial/SecondStart 5.2
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
342 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.16.0/json-events (39.19s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-048000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-048000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (39.191982834s)

-- stdout --
	{"specversion":"1.0","id":"0a0b3e21-3075-4692-95ae-0c37be2e6443","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-048000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5cdabd09-cd53-48b5-afe2-0f3a1f304371","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18170"}}
	{"specversion":"1.0","id":"3a9951e4-e869-4a39-879f-74f5442ab445","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig"}}
	{"specversion":"1.0","id":"fc0ebf82-3298-48fc-aa8f-9d9b43dbcb04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"412609fd-73ce-4158-8323-d91004dc9b62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"87910dd7-1638-4741-86ca-e4cfa5bad17d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube"}}
	{"specversion":"1.0","id":"4b81d21c-6c4d-4b20-9af3-4ea9481b44b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"713b0c20-a9fe-43e8-bf1a-53118885c789","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2dc14802-e998-4d9c-8a23-7dd553e66d9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"64224d3e-0955-4e97-8ca0-b797bcf62fcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab7b0b97-921c-4b5f-9671-b211dbef250e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-048000 in cluster download-only-048000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"62784aad-bdbb-46c0-bfa8-66215875350d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b6797c9-ca6c-45fa-a641-17c67a140e0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/18170-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106667080 0x106667080 0x106667080 0x106667080 0x106667080 0x106667080 0x106667080] Decompressors:map[bz2:0x140004961c0 gz:0x140004961c8 tar:0x14000496170 tar.bz2:0x14000496180 tar.gz:0x14000496190 tar.xz:0x140004961a0 tar.zst:0x140004961b0 tbz2:0x14000496180 tgz:0x1400049
6190 txz:0x140004961a0 tzst:0x140004961b0 xz:0x140004961d0 zip:0x140004961e0 zst:0x140004961d8] Getters:map[file:0x14000cf8600 http:0x140008d42d0 https:0x140008d4320] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"f0218734-d443-46d0-a349-cd98c0aa04df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0213 14:38:44.220155    1409 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:38:44.220308    1409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:38:44.220311    1409 out.go:304] Setting ErrFile to fd 2...
	I0213 14:38:44.220314    1409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:38:44.220427    1409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	W0213 14:38:44.220511    1409 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18170-979/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18170-979/.minikube/config/config.json: no such file or directory
	I0213 14:38:44.221732    1409 out.go:298] Setting JSON to true
	I0213 14:38:44.238782    1409 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":346,"bootTime":1707863578,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 14:38:44.238854    1409 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:38:44.244641    1409 out.go:97] [download-only-048000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 14:38:44.247703    1409 out.go:169] MINIKUBE_LOCATION=18170
	I0213 14:38:44.244780    1409 notify.go:220] Checking for updates...
	W0213 14:38:44.244798    1409 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball: no such file or directory
	I0213 14:38:44.255621    1409 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:38:44.258698    1409 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 14:38:44.261685    1409 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:38:44.264669    1409 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	W0213 14:38:44.270687    1409 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 14:38:44.270893    1409 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:38:44.275576    1409 out.go:97] Using the qemu2 driver based on user configuration
	I0213 14:38:44.275593    1409 start.go:298] selected driver: qemu2
	I0213 14:38:44.275605    1409 start.go:902] validating driver "qemu2" against <nil>
	I0213 14:38:44.275669    1409 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 14:38:44.278673    1409 out.go:169] Automatically selected the socket_vmnet network
	I0213 14:38:44.284260    1409 start_flags.go:392] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0213 14:38:44.284337    1409 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 14:38:44.284445    1409 cni.go:84] Creating CNI manager for ""
	I0213 14:38:44.284460    1409 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 14:38:44.284463    1409 start_flags.go:321] config:
	{Name:download-only-048000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-048000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:38:44.289924    1409 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 14:38:44.293697    1409 out.go:97] Downloading VM boot image ...
	I0213 14:38:44.293728    1409 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso
	I0213 14:39:02.399186    1409 out.go:97] Starting control plane node download-only-048000 in cluster download-only-048000
	I0213 14:39:02.399231    1409 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 14:39:02.699758    1409 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0213 14:39:02.699847    1409 cache.go:56] Caching tarball of preloaded images
	I0213 14:39:02.700583    1409 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 14:39:02.706193    1409 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0213 14:39:02.706221    1409 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:39:03.321123    1409 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0213 14:39:21.981450    1409 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:39:21.981602    1409 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:39:22.633887    1409 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 14:39:22.634076    1409 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/download-only-048000/config.json ...
	I0213 14:39:22.634094    1409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/download-only-048000/config.json: {Name:mkcf4d3fc36f141969847e9612eb45eb33c0fc17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:39:22.634334    1409 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 14:39:22.634521    1409 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0213 14:39:23.335632    1409 out.go:169] 
	W0213 14:39:23.340670    1409 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/18170-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106667080 0x106667080 0x106667080 0x106667080 0x106667080 0x106667080 0x106667080] Decompressors:map[bz2:0x140004961c0 gz:0x140004961c8 tar:0x14000496170 tar.bz2:0x14000496180 tar.gz:0x14000496190 tar.xz:0x140004961a0 tar.zst:0x140004961b0 tbz2:0x14000496180 tgz:0x14000496190 txz:0x140004961a0 tzst:0x140004961b0 xz:0x140004961d0 zip:0x140004961e0 zst:0x140004961d8] Getters:map[file:0x14000cf8600 http:0x140008d42d0 https:0x140008d4320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0213 14:39:23.340693    1409 out_reason.go:110] 
	W0213 14:39:23.348518    1409 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 14:39:23.352553    1409 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-048000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (39.19s)
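
Root cause: Kubernetes v1.16.0 shipped no darwin/arm64 kubectl binary, so the .sha1 checksum URL that minikube hands to go-getter (the ?checksum=file:... query visible in the log) returns 404, and the run exits with status 40 before the binary download is even attempted. A minimal stdlib-only Go sketch to confirm the 404; the URL is copied from the log above, everything else is illustrative and not minikube code:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// kubectl v1.16.0 predates darwin/arm64 release binaries, so both the
		// binary and its .sha1 checksum file 404 on dl.k8s.io (URL from the log).
		url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		resp.Body.Close()
		// Expected: 404, matching "bad response code: 404" in the log.
		fmt.Println(resp.StatusCode)
	}

Run on a machine with network access, this should print 404, which is why TestDownloadOnly/v1.16.0/kubectl below also fails: the cached file was never written.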

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18170-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestOffline (10.32s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-025000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-025000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.188813209s)

-- stdout --
	* [offline-docker-025000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-025000 in cluster offline-docker-025000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-025000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:02:46.254494    3074 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:02:46.254617    3074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:02:46.254621    3074 out.go:304] Setting ErrFile to fd 2...
	I0213 15:02:46.254623    3074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:02:46.254762    3074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:02:46.255911    3074 out.go:298] Setting JSON to false
	I0213 15:02:46.273358    3074 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1788,"bootTime":1707863578,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:02:46.273455    3074 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:02:46.278148    3074 out.go:177] * [offline-docker-025000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:02:46.286113    3074 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:02:46.290113    3074 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:02:46.286112    3074 notify.go:220] Checking for updates...
	I0213 15:02:46.296039    3074 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:02:46.299120    3074 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:02:46.302101    3074 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:02:46.305057    3074 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:02:46.308512    3074 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:02:46.308563    3074 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:02:46.312029    3074 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:02:46.319103    3074 start.go:298] selected driver: qemu2
	I0213 15:02:46.319113    3074 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:02:46.319120    3074 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:02:46.321172    3074 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:02:46.324097    3074 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:02:46.327197    3074 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:02:46.327234    3074 cni.go:84] Creating CNI manager for ""
	I0213 15:02:46.327242    3074 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:02:46.327247    3074 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:02:46.327253    3074 start_flags.go:321] config:
	{Name:offline-docker-025000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-025000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:02:46.331830    3074 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:02:46.339092    3074 out.go:177] * Starting control plane node offline-docker-025000 in cluster offline-docker-025000
	I0213 15:02:46.343074    3074 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:02:46.343105    3074 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:02:46.343118    3074 cache.go:56] Caching tarball of preloaded images
	I0213 15:02:46.343190    3074 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:02:46.343195    3074 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:02:46.343271    3074 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/offline-docker-025000/config.json ...
	I0213 15:02:46.343282    3074 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/offline-docker-025000/config.json: {Name:mk51ed28f0d4eb5eea97758914613843644165ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:02:46.343497    3074 start.go:365] acquiring machines lock for offline-docker-025000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:02:46.343527    3074 start.go:369] acquired machines lock for "offline-docker-025000" in 23.042µs
	I0213 15:02:46.343538    3074 start.go:93] Provisioning new machine with config: &{Name:offline-docker-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-025000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:02:46.343571    3074 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:02:46.347117    3074 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0213 15:02:46.362981    3074 start.go:159] libmachine.API.Create for "offline-docker-025000" (driver="qemu2")
	I0213 15:02:46.363007    3074 client.go:168] LocalClient.Create starting
	I0213 15:02:46.363098    3074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:02:46.363132    3074 main.go:141] libmachine: Decoding PEM data...
	I0213 15:02:46.363140    3074 main.go:141] libmachine: Parsing certificate...
	I0213 15:02:46.363180    3074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:02:46.363204    3074 main.go:141] libmachine: Decoding PEM data...
	I0213 15:02:46.363214    3074 main.go:141] libmachine: Parsing certificate...
	I0213 15:02:46.363565    3074 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:02:46.489051    3074 main.go:141] libmachine: Creating SSH key...
	I0213 15:02:46.538216    3074 main.go:141] libmachine: Creating Disk image...
	I0213 15:02:46.538228    3074 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:02:46.538461    3074 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/disk.qcow2
	I0213 15:02:46.551194    3074 main.go:141] libmachine: STDOUT: 
	I0213 15:02:46.551232    3074 main.go:141] libmachine: STDERR: 
	I0213 15:02:46.551298    3074 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/disk.qcow2 +20000M
	I0213 15:02:46.562906    3074 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:02:46.562925    3074 main.go:141] libmachine: STDERR: 
	I0213 15:02:46.562951    3074 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/disk.qcow2
	I0213 15:02:46.562956    3074 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:02:46.562997    3074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:ed:f8:aa:f0:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/disk.qcow2
	I0213 15:02:46.565181    3074 main.go:141] libmachine: STDOUT: 
	I0213 15:02:46.565210    3074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:02:46.565237    3074 client.go:171] LocalClient.Create took 202.229084ms
	I0213 15:02:48.565253    3074 start.go:128] duration metric: createHost completed in 2.221744s
	I0213 15:02:48.565270    3074 start.go:83] releasing machines lock for "offline-docker-025000", held for 2.221806292s
	W0213 15:02:48.565283    3074 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:02:48.574399    3074 out.go:177] * Deleting "offline-docker-025000" in qemu2 ...
	W0213 15:02:48.581739    3074 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:02:48.581755    3074 start.go:709] Will try again in 5 seconds ...
	I0213 15:02:53.583722    3074 start.go:365] acquiring machines lock for offline-docker-025000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:02:53.583879    3074 start.go:369] acquired machines lock for "offline-docker-025000" in 102.042µs
	I0213 15:02:53.583907    3074 start.go:93] Provisioning new machine with config: &{Name:offline-docker-025000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-025000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:02:53.583963    3074 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:02:53.698260    3074 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0213 15:02:53.728507    3074 start.go:159] libmachine.API.Create for "offline-docker-025000" (driver="qemu2")
	I0213 15:02:53.728540    3074 client.go:168] LocalClient.Create starting
	I0213 15:02:53.728682    3074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:02:53.728734    3074 main.go:141] libmachine: Decoding PEM data...
	I0213 15:02:53.728748    3074 main.go:141] libmachine: Parsing certificate...
	I0213 15:02:53.728792    3074 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:02:53.728824    3074 main.go:141] libmachine: Decoding PEM data...
	I0213 15:02:53.728836    3074 main.go:141] libmachine: Parsing certificate...
	I0213 15:02:53.729227    3074 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:02:54.220223    3074 main.go:141] libmachine: Creating SSH key...
	I0213 15:02:54.336806    3074 main.go:141] libmachine: Creating Disk image...
	I0213 15:02:54.336814    3074 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:02:54.336993    3074 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/disk.qcow2
	I0213 15:02:54.351713    3074 main.go:141] libmachine: STDOUT: 
	I0213 15:02:54.351737    3074 main.go:141] libmachine: STDERR: 
	I0213 15:02:54.351803    3074 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/disk.qcow2 +20000M
	I0213 15:02:54.364473    3074 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:02:54.364494    3074 main.go:141] libmachine: STDERR: 
	I0213 15:02:54.364516    3074 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/disk.qcow2
	I0213 15:02:54.364523    3074 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:02:54.364561    3074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:6b:9e:dc:be:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/offline-docker-025000/disk.qcow2
	I0213 15:02:54.366954    3074 main.go:141] libmachine: STDOUT: 
	I0213 15:02:54.366976    3074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:02:54.366990    3074 client.go:171] LocalClient.Create took 638.465417ms
	I0213 15:02:56.369033    3074 start.go:128] duration metric: createHost completed in 2.785142709s
	I0213 15:02:56.369050    3074 start.go:83] releasing machines lock for "offline-docker-025000", held for 2.785250291s
	W0213 15:02:56.369115    3074 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-025000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-025000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:02:56.376251    3074 out.go:177] 
	W0213 15:02:56.388157    3074 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:02:56.388163    3074 out.go:239] * 
	* 
	W0213 15:02:56.388655    3074 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:02:56.399169    3074 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-025000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:523: *** TestOffline FAILED at 2024-02-13 15:02:56.412228 -0800 PST m=+1452.343161584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-025000 -n offline-docker-025000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-025000 -n offline-docker-025000: exit status 7 (34.220125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-025000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-025000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-025000
--- FAIL: TestOffline (10.32s)
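
Root cause: the qemu2 driver tunnels the VM NIC through socket_vmnet, and "Failed to connect to \"/var/run/socket_vmnet\": Connection refused" means no daemon was listening on that socket, so QEMU never started. A minimal stdlib-only Go probe (socket path copied from the log; illustrative, not minikube code) to check the daemon before rerunning:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client hands to QEMU via fd=3.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this agent this prints "connection refused", matching the log.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

Many of the other roughly 10-second Start/FirstStart failures in the table above are consistent with this same signature, though their logs are not reproduced in this excerpt.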

TestAddons/parallel/Ingress (34.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-975000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-975000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-975000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [144c4b46-b3a7-4134-baa3-e5977e143324] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [144c4b46-b3a7-4134-baa3-e5977e143324] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.002997959s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-975000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-975000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-975000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.2: exit status 1 (15.031711209s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
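
The nslookup timed out because nothing answered DNS on the cluster IP, which is what the ingress-dns addon is expected to serve. A stdlib-only Go equivalent of the failing query (server IP and hostname copied from the log; the 15s deadline mirrors the observed 15.03s timeout):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Send the lookup straight to 192.168.105.2:53, as
		// `nslookup hello-john.test 192.168.105.2` does.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, network, "192.168.105.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		fmt.Println(addrs, err) // in this run: no answer before the deadline
	}
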
addons_test.go:306: (dbg) Run:  out/minikube-darwin-arm64 -p addons-975000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 -p addons-975000 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-darwin-arm64 -p addons-975000 addons disable ingress --alsologtostderr -v=1: (7.225854958s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-975000 -n addons-975000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-975000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 13 Feb 24 14:39 PST | 13 Feb 24 14:39 PST |
	| delete  | -p download-only-938000                                                                     | download-only-938000 | jenkins | v1.32.0 | 13 Feb 24 14:39 PST | 13 Feb 24 14:39 PST |
	| start   | -o=json --download-only                                                                     | download-only-091000 | jenkins | v1.32.0 | 13 Feb 24 14:39 PST |                     |
	|         | -p download-only-091000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 13 Feb 24 14:40 PST | 13 Feb 24 14:40 PST |
	| delete  | -p download-only-091000                                                                     | download-only-091000 | jenkins | v1.32.0 | 13 Feb 24 14:40 PST | 13 Feb 24 14:40 PST |
	| delete  | -p download-only-048000                                                                     | download-only-048000 | jenkins | v1.32.0 | 13 Feb 24 14:40 PST | 13 Feb 24 14:40 PST |
	| delete  | -p download-only-938000                                                                     | download-only-938000 | jenkins | v1.32.0 | 13 Feb 24 14:40 PST | 13 Feb 24 14:40 PST |
	| delete  | -p download-only-091000                                                                     | download-only-091000 | jenkins | v1.32.0 | 13 Feb 24 14:40 PST | 13 Feb 24 14:40 PST |
	| start   | --download-only -p                                                                          | binary-mirror-956000 | jenkins | v1.32.0 | 13 Feb 24 14:40 PST |                     |
	|         | binary-mirror-956000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49326                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-956000                                                                     | binary-mirror-956000 | jenkins | v1.32.0 | 13 Feb 24 14:40 PST | 13 Feb 24 14:40 PST |
	| addons  | enable dashboard -p                                                                         | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:40 PST |                     |
	|         | addons-975000                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:40 PST |                     |
	|         | addons-975000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-975000 --wait=true                                                                | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:40 PST | 13 Feb 24 14:43 PST |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                                                |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                                           |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| ip      | addons-975000 ip                                                                            | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:43 PST | 13 Feb 24 14:43 PST |
	| addons  | addons-975000 addons disable                                                                | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:43 PST | 13 Feb 24 14:43 PST |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-975000 addons                                                                        | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:43 PST | 13 Feb 24 14:43 PST |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:44 PST | 13 Feb 24 14:44 PST |
	|         | addons-975000                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-975000 ssh curl -s                                                                   | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:44 PST | 13 Feb 24 14:44 PST |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-975000 ip                                                                            | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:44 PST | 13 Feb 24 14:44 PST |
	| addons  | addons-975000 addons                                                                        | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:44 PST | 13 Feb 24 14:44 PST |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-975000 addons                                                                        | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:44 PST | 13 Feb 24 14:44 PST |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-975000 addons disable                                                                | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:44 PST | 13 Feb 24 14:44 PST |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-975000 addons disable                                                                | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:44 PST | 13 Feb 24 14:44 PST |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| ssh     | addons-975000 ssh cat                                                                       | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:44 PST | 13 Feb 24 14:44 PST |
	|         | /opt/local-path-provisioner/pvc-69fc1814-f173-4904-b8e0-9dadd6946f89_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-975000 addons disable                                                                | addons-975000        | jenkins | v1.32.0 | 13 Feb 24 14:44 PST |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
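
Note: the addons run recorded above can be reproduced with a single invocation; the following is reconstructed from the Args column of the table, not a command captured verbatim from the job:

    minikube start -p addons-975000 --wait=true --memory=4000 --alsologtostderr \
      --addons=registry --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
      --addons=inspektor-gadget --addons=storage-provisioner-rancher \
      --addons=nvidia-device-plugin --addons=yakd --addons=ingress \
      --addons=ingress-dns --driver=qemu2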
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 14:40:13
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 14:40:13.314553    1603 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:40:13.314661    1603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:40:13.314663    1603 out.go:304] Setting ErrFile to fd 2...
	I0213 14:40:13.314666    1603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:40:13.314796    1603 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 14:40:13.315954    1603 out.go:298] Setting JSON to false
	I0213 14:40:13.332416    1603 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":435,"bootTime":1707863578,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 14:40:13.332477    1603 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:40:13.336833    1603 out.go:177] * [addons-975000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 14:40:13.343854    1603 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 14:40:13.343891    1603 notify.go:220] Checking for updates...
	I0213 14:40:13.350858    1603 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:40:13.357815    1603 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 14:40:13.364808    1603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:40:13.367822    1603 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 14:40:13.370814    1603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 14:40:13.374978    1603 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:40:13.378822    1603 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 14:40:13.385829    1603 start.go:298] selected driver: qemu2
	I0213 14:40:13.385834    1603 start.go:902] validating driver "qemu2" against <nil>
	I0213 14:40:13.385839    1603 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 14:40:13.388280    1603 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 14:40:13.391826    1603 out.go:177] * Automatically selected the socket_vmnet network
	I0213 14:40:13.394944    1603 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 14:40:13.394982    1603 cni.go:84] Creating CNI manager for ""
	I0213 14:40:13.394989    1603 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:40:13.394994    1603 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 14:40:13.395000    1603 start_flags.go:321] config:
	{Name:addons-975000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-975000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:40:13.399710    1603 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 14:40:13.407819    1603 out.go:177] * Starting control plane node addons-975000 in cluster addons-975000
	I0213 14:40:13.410687    1603 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 14:40:13.410702    1603 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 14:40:13.410714    1603 cache.go:56] Caching tarball of preloaded images
	I0213 14:40:13.410774    1603 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 14:40:13.410780    1603 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0213 14:40:13.411024    1603 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/config.json ...
	I0213 14:40:13.411036    1603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/config.json: {Name:mk962f8987ac305da88ebf34537b73cab0c7c61c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:13.411415    1603 start.go:365] acquiring machines lock for addons-975000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 14:40:13.411488    1603 start.go:369] acquired machines lock for "addons-975000" in 66.875µs
	I0213 14:40:13.411501    1603 start.go:93] Provisioning new machine with config: &{Name:addons-975000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-975000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 14:40:13.411544    1603 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 14:40:13.418835    1603 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0213 14:40:15.472730    1603 start.go:159] libmachine.API.Create for "addons-975000" (driver="qemu2")
	I0213 14:40:15.472775    1603 client.go:168] LocalClient.Create starting
	I0213 14:40:15.473019    1603 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 14:40:15.599774    1603 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 14:40:15.757971    1603 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 14:40:16.063445    1603 main.go:141] libmachine: Creating SSH key...
	I0213 14:40:16.151107    1603 main.go:141] libmachine: Creating Disk image...
	I0213 14:40:16.151113    1603 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 14:40:16.151352    1603 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/disk.qcow2
	I0213 14:40:16.304477    1603 main.go:141] libmachine: STDOUT: 
	I0213 14:40:16.304511    1603 main.go:141] libmachine: STDERR: 
	I0213 14:40:16.304596    1603 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/disk.qcow2 +20000M
	I0213 14:40:16.323282    1603 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 14:40:16.323319    1603 main.go:141] libmachine: STDERR: 
	I0213 14:40:16.323333    1603 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/disk.qcow2
	I0213 14:40:16.323342    1603 main.go:141] libmachine: Starting QEMU VM...
	I0213 14:40:16.323393    1603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:10:11:d4:80:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/disk.qcow2
	I0213 14:40:16.386942    1603 main.go:141] libmachine: STDOUT: 
	I0213 14:40:16.386973    1603 main.go:141] libmachine: STDERR: 
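
Note: the VM is launched through socket_vmnet with hvf acceleration and the edk2 firmware shipped by Homebrew's qemu. If this step fails on a comparable machine, it is worth confirming both are actually present (illustrative checks, not part of this run):

    qemu-system-aarch64 -accel help
    ls /opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd

"-accel help" lists the accelerators compiled into the binary; hvf should appear on Apple Silicon builds.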
	I0213 14:40:16.386977    1603 main.go:141] libmachine: Attempt 0
	I0213 14:40:16.386989    1603 main.go:141] libmachine: Searching for 22:10:11:d4:80:6e in /var/db/dhcpd_leases ...
	I0213 14:40:16.387042    1603 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0213 14:40:16.387064    1603 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd3fa6}
	I0213 14:40:18.387341    1603 main.go:141] libmachine: Attempt 1
	I0213 14:40:18.387473    1603 main.go:141] libmachine: Searching for 22:10:11:d4:80:6e in /var/db/dhcpd_leases ...
	I0213 14:40:18.387710    1603 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0213 14:40:18.387787    1603 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd3fa6}
	I0213 14:40:20.388662    1603 main.go:141] libmachine: Attempt 2
	I0213 14:40:20.388744    1603 main.go:141] libmachine: Searching for 22:10:11:d4:80:6e in /var/db/dhcpd_leases ...
	I0213 14:40:20.389033    1603 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0213 14:40:20.389084    1603 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd3fa6}
	I0213 14:40:22.389957    1603 main.go:141] libmachine: Attempt 3
	I0213 14:40:22.389983    1603 main.go:141] libmachine: Searching for 22:10:11:d4:80:6e in /var/db/dhcpd_leases ...
	I0213 14:40:22.390024    1603 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0213 14:40:22.390058    1603 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd3fa6}
	I0213 14:40:24.392055    1603 main.go:141] libmachine: Attempt 4
	I0213 14:40:24.392069    1603 main.go:141] libmachine: Searching for 22:10:11:d4:80:6e in /var/db/dhcpd_leases ...
	I0213 14:40:24.392108    1603 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0213 14:40:24.392114    1603 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd3fa6}
	I0213 14:40:26.394110    1603 main.go:141] libmachine: Attempt 5
	I0213 14:40:26.394119    1603 main.go:141] libmachine: Searching for 22:10:11:d4:80:6e in /var/db/dhcpd_leases ...
	I0213 14:40:26.394153    1603 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0213 14:40:26.394178    1603 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd3fa6}
	I0213 14:40:28.396207    1603 main.go:141] libmachine: Attempt 6
	I0213 14:40:28.396223    1603 main.go:141] libmachine: Searching for 22:10:11:d4:80:6e in /var/db/dhcpd_leases ...
	I0213 14:40:28.396302    1603 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0213 14:40:28.396312    1603 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd3fa6}
	I0213 14:40:30.398351    1603 main.go:141] libmachine: Attempt 7
	I0213 14:40:30.398384    1603 main.go:141] libmachine: Searching for 22:10:11:d4:80:6e in /var/db/dhcpd_leases ...
	I0213 14:40:30.398473    1603 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0213 14:40:30.398484    1603 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cd415d}
	I0213 14:40:30.398488    1603 main.go:141] libmachine: Found match: 22:10:11:d4:80:6e
	I0213 14:40:30.398498    1603 main.go:141] libmachine: IP: 192.168.105.2
	I0213 14:40:30.398502    1603 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
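
Note: the MAC-to-IP resolution above polls /var/db/dhcpd_leases, the lease database macOS keeps for vmnet networks. Assuming the usual lease-entry layout, the same lookup can be done by hand:

    grep -B2 '22:10:11:d4:80:6e' /var/db/dhcpd_leases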
	I0213 14:40:31.417670    1603 machine.go:88] provisioning docker machine ...
	I0213 14:40:31.417718    1603 buildroot.go:166] provisioning hostname "addons-975000"
	I0213 14:40:31.419156    1603 main.go:141] libmachine: Using SSH client type: native
	I0213 14:40:31.419913    1603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b938e0] 0x104b96050 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0213 14:40:31.419935    1603 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-975000 && echo "addons-975000" | sudo tee /etc/hostname
	I0213 14:40:31.510242    1603 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-975000
	
	I0213 14:40:31.510367    1603 main.go:141] libmachine: Using SSH client type: native
	I0213 14:40:31.510863    1603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b938e0] 0x104b96050 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0213 14:40:31.510878    1603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-975000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-975000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-975000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 14:40:31.582183    1603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 14:40:31.582204    1603 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18170-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18170-979/.minikube}
	I0213 14:40:31.582216    1603 buildroot.go:174] setting up certificates
	I0213 14:40:31.582236    1603 provision.go:83] configureAuth start
	I0213 14:40:31.582255    1603 provision.go:138] copyHostCerts
	I0213 14:40:31.582438    1603 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem (1078 bytes)
	I0213 14:40:31.582783    1603 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem (1123 bytes)
	I0213 14:40:31.582971    1603 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem (1675 bytes)
	I0213 14:40:31.583102    1603 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem org=jenkins.addons-975000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-975000]
	I0213 14:40:31.676392    1603 provision.go:172] copyRemoteCerts
	I0213 14:40:31.676452    1603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 14:40:31.676470    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:40:31.711289    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 14:40:31.718311    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0213 14:40:31.724849    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 14:40:31.731940    1603 provision.go:86] duration metric: configureAuth took 149.699875ms
	I0213 14:40:31.731951    1603 buildroot.go:189] setting minikube options for container-runtime
	I0213 14:40:31.732047    1603 config.go:182] Loaded profile config "addons-975000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 14:40:31.732079    1603 main.go:141] libmachine: Using SSH client type: native
	I0213 14:40:31.732292    1603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b938e0] 0x104b96050 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0213 14:40:31.732296    1603 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 14:40:31.793673    1603 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0213 14:40:31.793681    1603 buildroot.go:70] root file system type: tmpfs
	I0213 14:40:31.793735    1603 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 14:40:31.793786    1603 main.go:141] libmachine: Using SSH client type: native
	I0213 14:40:31.794024    1603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b938e0] 0x104b96050 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0213 14:40:31.794058    1603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 14:40:31.860495    1603 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 14:40:31.860542    1603 main.go:141] libmachine: Using SSH client type: native
	I0213 14:40:31.860824    1603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b938e0] 0x104b96050 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0213 14:40:31.860839    1603 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 14:40:32.216339    1603 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
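
Note: the command above writes docker.service.new and only swaps it in when it differs from the installed unit. One way to confirm the swap took effect is to inspect the unit over the same SSH channel once the profile is up (diagnostic commands, not part of this run):

    minikube -p addons-975000 ssh -- sudo systemctl cat docker.service
    minikube -p addons-975000 ssh -- sudo systemctl show docker -p ExecStart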
	I0213 14:40:32.216353    1603 machine.go:91] provisioned docker machine in 798.675542ms
	I0213 14:40:32.216359    1603 client.go:171] LocalClient.Create took 16.743981208s
	I0213 14:40:32.216370    1603 start.go:167] duration metric: libmachine.API.Create for "addons-975000" took 16.744050666s
	I0213 14:40:32.216374    1603 start.go:300] post-start starting for "addons-975000" (driver="qemu2")
	I0213 14:40:32.216380    1603 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 14:40:32.216443    1603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 14:40:32.216452    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:40:32.249773    1603 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 14:40:32.251189    1603 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 14:40:32.251197    1603 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/addons for local assets ...
	I0213 14:40:32.251275    1603 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/files for local assets ...
	I0213 14:40:32.251304    1603 start.go:303] post-start completed in 34.927708ms
	I0213 14:40:32.251671    1603 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/config.json ...
	I0213 14:40:32.251858    1603 start.go:128] duration metric: createHost completed in 18.840763375s
	I0213 14:40:32.251888    1603 main.go:141] libmachine: Using SSH client type: native
	I0213 14:40:32.252106    1603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b938e0] 0x104b96050 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0213 14:40:32.252111    1603 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0213 14:40:32.310348    1603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707864031.814966168
	
	I0213 14:40:32.310354    1603 fix.go:206] guest clock: 1707864031.814966168
	I0213 14:40:32.310358    1603 fix.go:219] Guest: 2024-02-13 14:40:31.814966168 -0800 PST Remote: 2024-02-13 14:40:32.25186 -0800 PST m=+18.958304834 (delta=-436.893832ms)
	I0213 14:40:32.310369    1603 fix.go:190] guest clock delta is within tolerance: -436.893832ms
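
Note: the guest-clock check compares the VM's "date +%s.%N" output against the host clock and tolerates small skew. A rough manual spot-check (second-level resolution only, since BSD date on the macOS host lacks %N) would be:

    host=$(date +%s)
    guest=$(ssh -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa docker@192.168.105.2 date +%s)
    echo "guest-host delta: $((guest - host))s"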
	I0213 14:40:32.310375    1603 start.go:83] releasing machines lock for "addons-975000", held for 18.899336333s
	I0213 14:40:32.310642    1603 ssh_runner.go:195] Run: cat /version.json
	I0213 14:40:32.310651    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:40:32.310667    1603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 14:40:32.310701    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:40:32.343765    1603 ssh_runner.go:195] Run: systemctl --version
	I0213 14:40:32.385938    1603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 14:40:32.387896    1603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 14:40:32.387921    1603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 14:40:32.393526    1603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 14:40:32.393533    1603 start.go:475] detecting cgroup driver to use...
	I0213 14:40:32.393652    1603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 14:40:32.399747    1603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 14:40:32.402959    1603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 14:40:32.405782    1603 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 14:40:32.405806    1603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 14:40:32.408505    1603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 14:40:32.411378    1603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 14:40:32.414320    1603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 14:40:32.417027    1603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 14:40:32.420143    1603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 14:40:32.423603    1603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 14:40:32.426510    1603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 14:40:32.429051    1603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:40:32.506160    1603 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 14:40:32.514408    1603 start.go:475] detecting cgroup driver to use...
	I0213 14:40:32.514461    1603 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 14:40:32.521118    1603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 14:40:32.525897    1603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 14:40:32.531587    1603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 14:40:32.536303    1603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 14:40:32.541212    1603 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0213 14:40:32.583529    1603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 14:40:32.588976    1603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 14:40:32.594465    1603 ssh_runner.go:195] Run: which cri-dockerd
	I0213 14:40:32.595620    1603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 14:40:32.598333    1603 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 14:40:32.603237    1603 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 14:40:32.687931    1603 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 14:40:32.767148    1603 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 14:40:32.767218    1603 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 14:40:32.772516    1603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:40:32.856303    1603 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 14:40:34.013817    1603 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157525875s)
	I0213 14:40:34.013878    1603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 14:40:34.018404    1603 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0213 14:40:34.024819    1603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 14:40:34.029780    1603 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 14:40:34.115182    1603 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 14:40:34.195485    1603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:40:34.274909    1603 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 14:40:34.280936    1603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 14:40:34.285507    1603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:40:34.374027    1603 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 14:40:34.396275    1603 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 14:40:34.396382    1603 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 14:40:34.399492    1603 start.go:543] Will wait 60s for crictl version
	I0213 14:40:34.399540    1603 ssh_runner.go:195] Run: which crictl
	I0213 14:40:34.400741    1603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 14:40:34.419124    1603 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0213 14:40:34.419217    1603 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 14:40:34.429227    1603 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 14:40:34.440653    1603 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0213 14:40:34.440780    1603 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0213 14:40:34.442200    1603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 14:40:34.445849    1603 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 14:40:34.445889    1603 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 14:40:34.451038    1603 docker.go:685] Got preloaded images: 
	I0213 14:40:34.451045    1603 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0213 14:40:34.451082    1603 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 14:40:34.454278    1603 ssh_runner.go:195] Run: which lz4
	I0213 14:40:34.455608    1603 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 14:40:34.456992    1603 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 14:40:34.457003    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (357941720 bytes)
	I0213 14:40:35.784454    1603 docker.go:649] Took 1.328882 seconds to copy over tarball
	I0213 14:40:35.784507    1603 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 14:40:36.849108    1603 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.064613292s)
	I0213 14:40:36.849123    1603 ssh_runner.go:146] rm: /preloaded.tar.lz4
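
Note: the preload is a plain lz4-compressed tar of /var/lib/docker content, so its contents can be listed on the host without a VM (assuming the lz4 CLI is installed):

    lz4 -dc /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 | tar -t | head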
	I0213 14:40:36.864593    1603 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 14:40:36.867995    1603 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0213 14:40:36.873735    1603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:40:36.950784    1603 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 14:40:38.988361    1603 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.037609667s)
	I0213 14:40:38.988448    1603 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 14:40:38.994639    1603 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 14:40:38.994649    1603 cache_images.go:84] Images are preloaded, skipping loading
	I0213 14:40:38.994717    1603 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 14:40:39.002281    1603 cni.go:84] Creating CNI manager for ""
	I0213 14:40:39.002292    1603 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:40:39.002319    1603 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 14:40:39.002329    1603 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-975000 NodeName:addons-975000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 14:40:39.002394    1603 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-975000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
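
Note: the rendered kubeadm config above is copied into the guest as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). As a diagnostic step this run does not perform, the file can be validated inside the guest with kubeadm's dry-run mode, which checks the YAML and prints the manifests it would write without touching the node:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run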
	I0213 14:40:39.002435    1603 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-975000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-975000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 14:40:39.002493    1603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 14:40:39.005365    1603 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 14:40:39.005402    1603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 14:40:39.008130    1603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0213 14:40:39.013330    1603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 14:40:39.018238    1603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0213 14:40:39.023319    1603 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0213 14:40:39.024625    1603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 14:40:39.028301    1603 certs.go:56] Setting up /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000 for IP: 192.168.105.2
	I0213 14:40:39.028310    1603 certs.go:190] acquiring lock for shared ca certs: {Name:mk65e421691b8fb2c09fb65e08f20f9a769da9f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:39.028456    1603 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key
	I0213 14:40:39.286608    1603 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt ...
	I0213 14:40:39.286630    1603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt: {Name:mk44c5f4135d4b7ba6f2637363c31eb2c1f9d9f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:39.286961    1603 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key ...
	I0213 14:40:39.286966    1603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key: {Name:mk359cb6d0c84be3a211f25ef462e257c00706ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:39.287111    1603 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key
	I0213 14:40:39.406929    1603 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.crt ...
	I0213 14:40:39.406934    1603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.crt: {Name:mkf07a85f5d2142617c78208bb0ee6b0509e28a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:39.407136    1603 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key ...
	I0213 14:40:39.407139    1603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key: {Name:mk870711db8e5e7c42b891629413d3488c536dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:39.407314    1603 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.key
	I0213 14:40:39.407320    1603 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt with IP's: []
	I0213 14:40:39.458866    1603 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt ...
	I0213 14:40:39.458870    1603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: {Name:mkad6a6306633663a41ef44617116c93d3ed7523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:39.459025    1603 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.key ...
	I0213 14:40:39.459028    1603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.key: {Name:mk275886f19d572eac0ce138e145d8a748d3d1d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:39.459135    1603 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/apiserver.key.96055969
	I0213 14:40:39.459144    1603 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 14:40:39.551387    1603 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/apiserver.crt.96055969 ...
	I0213 14:40:39.551394    1603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/apiserver.crt.96055969: {Name:mk16dd9ecff959828e164fb3d800b7704bedfd75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:39.551597    1603 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/apiserver.key.96055969 ...
	I0213 14:40:39.551602    1603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/apiserver.key.96055969: {Name:mk9869fd91045144ae825fd2140e1416494b98a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:39.551748    1603 certs.go:337] copying /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/apiserver.crt
	I0213 14:40:39.551936    1603 certs.go:341] copying /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/apiserver.key
	I0213 14:40:39.552049    1603 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/proxy-client.key
	I0213 14:40:39.552073    1603 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/proxy-client.crt with IP's: []
	I0213 14:40:39.679264    1603 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/proxy-client.crt ...
	I0213 14:40:39.679269    1603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/proxy-client.crt: {Name:mkb22ac8d4eb40f6d602ccbe9864a1ca58f8eaae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:39.679466    1603 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/proxy-client.key ...
	I0213 14:40:39.679470    1603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/proxy-client.key: {Name:mkdac3a30dc064bcd637f9b4cd1eee7f9a812e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:39.679725    1603 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 14:40:39.679750    1603 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem (1078 bytes)
	I0213 14:40:39.679772    1603 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem (1123 bytes)
	I0213 14:40:39.679789    1603 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem (1675 bytes)
	I0213 14:40:39.680196    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 14:40:39.688215    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 14:40:39.695320    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 14:40:39.702250    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 14:40:39.708902    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 14:40:39.715708    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 14:40:39.723033    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 14:40:39.730273    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0213 14:40:39.736879    1603 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 14:40:39.743829    1603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 14:40:39.749110    1603 ssh_runner.go:195] Run: openssl version
	I0213 14:40:39.750894    1603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 14:40:39.753816    1603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:40:39.755098    1603 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13  2024 /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:40:39.755118    1603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:40:39.756776    1603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
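The b5213941.0 name is not arbitrary: OpenSSL resolves CAs in /etc/ssl/certs by the certificate's subject hash plus a .0 suffix, which is exactly what the `openssl x509 -hash` call above printed. A consistency check on the node (sketch):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # should point at minikubeCA.pem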
	I0213 14:40:39.760024    1603 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 14:40:39.761378    1603 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 14:40:39.761416    1603 kubeadm.go:404] StartCluster: {Name:addons-975000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-975000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:40:39.761476    1603 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 14:40:39.766858    1603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 14:40:39.769808    1603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 14:40:39.772376    1603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 14:40:39.775398    1603 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 14:40:39.775410    1603 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
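The long --ignore-preflight-errors list suppresses checks minikube has already accounted for: directories and manifests it pre-creates itself, plus the swap, CPU, and memory checks, which are sized by the VM config rather than by kubeadm. To see what preflight would complain about without the suppression, the phase can be re-run on its own inside the node (sketch):

	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml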
	I0213 14:40:39.799329    1603 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 14:40:39.799360    1603 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 14:40:39.859520    1603 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 14:40:39.859573    1603 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 14:40:39.859647    1603 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0213 14:40:39.961748    1603 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 14:40:39.975816    1603 out.go:204]   - Generating certificates and keys ...
	I0213 14:40:39.975848    1603 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 14:40:39.975882    1603 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 14:40:40.019822    1603 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 14:40:40.248711    1603 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 14:40:40.324933    1603 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 14:40:40.494393    1603 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 14:40:40.661276    1603 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 14:40:40.661350    1603 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-975000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0213 14:40:40.794717    1603 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 14:40:40.794791    1603 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-975000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0213 14:40:40.899688    1603 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 14:40:41.002096    1603 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 14:40:41.077740    1603 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 14:40:41.077770    1603 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 14:40:41.162242    1603 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 14:40:41.268920    1603 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 14:40:41.340083    1603 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 14:40:41.400268    1603 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 14:40:41.400483    1603 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 14:40:41.401696    1603 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 14:40:41.412914    1603 out.go:204]   - Booting up control plane ...
	I0213 14:40:41.412965    1603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 14:40:41.413009    1603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 14:40:41.413045    1603 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 14:40:41.413098    1603 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 14:40:41.413142    1603 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 14:40:41.413168    1603 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 14:40:41.500347    1603 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 14:40:45.001055    1603 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.500841 seconds
	I0213 14:40:45.001116    1603 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 14:40:45.006207    1603 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 14:40:45.515672    1603 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 14:40:45.515774    1603 kubeadm.go:322] [mark-control-plane] Marking the node addons-975000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 14:40:46.021501    1603 kubeadm.go:322] [bootstrap-token] Using token: y0vnwc.b3g84orokdvqec1r
	I0213 14:40:46.028066    1603 out.go:204]   - Configuring RBAC rules ...
	I0213 14:40:46.028121    1603 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 14:40:46.031663    1603 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 14:40:46.034265    1603 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 14:40:46.035494    1603 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0213 14:40:46.036537    1603 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 14:40:46.037697    1603 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 14:40:46.041929    1603 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 14:40:46.201952    1603 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 14:40:46.434103    1603 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 14:40:46.434489    1603 kubeadm.go:322] 
	I0213 14:40:46.434523    1603 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 14:40:46.434526    1603 kubeadm.go:322] 
	I0213 14:40:46.434560    1603 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 14:40:46.434566    1603 kubeadm.go:322] 
	I0213 14:40:46.434579    1603 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 14:40:46.434612    1603 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 14:40:46.434636    1603 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 14:40:46.434641    1603 kubeadm.go:322] 
	I0213 14:40:46.434665    1603 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 14:40:46.434667    1603 kubeadm.go:322] 
	I0213 14:40:46.434691    1603 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 14:40:46.434695    1603 kubeadm.go:322] 
	I0213 14:40:46.434718    1603 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 14:40:46.434755    1603 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 14:40:46.434791    1603 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 14:40:46.434796    1603 kubeadm.go:322] 
	I0213 14:40:46.434845    1603 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 14:40:46.434898    1603 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 14:40:46.434902    1603 kubeadm.go:322] 
	I0213 14:40:46.434952    1603 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y0vnwc.b3g84orokdvqec1r \
	I0213 14:40:46.435009    1603 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d \
	I0213 14:40:46.435022    1603 kubeadm.go:322] 	--control-plane 
	I0213 14:40:46.435027    1603 kubeadm.go:322] 
	I0213 14:40:46.435069    1603 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 14:40:46.435073    1603 kubeadm.go:322] 
	I0213 14:40:46.435124    1603 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y0vnwc.b3g84orokdvqec1r \
	I0213 14:40:46.435174    1603 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d 
	I0213 14:40:46.435248    1603 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
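The bootstrap token in the join commands above is short-lived. If a node needs to join later, a fresh token and command can be generated, and the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA with the standard kubeadm recipe; the cert path below is adjusted to the /var/lib/minikube/certs directory this run uses (sketch, run on the control plane):

	kubeadm token create --print-join-command
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'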
	I0213 14:40:46.435257    1603 cni.go:84] Creating CNI manager for ""
	I0213 14:40:46.435265    1603 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:40:46.443091    1603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 14:40:46.449247    1603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 14:40:46.452335    1603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
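The 457-byte conflist written here is minikube's default bridge network definition for the 10.244.0.0/16 pod CIDR configured earlier. Its effective content can be read back from the node (sketch, same profile assumption as above):

	minikube -p addons-975000 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist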
	I0213 14:40:46.457338    1603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 14:40:46.457388    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:46.457401    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fb52fe04bc8b044b129ef2ff27607d20a9fceb93 minikube.k8s.io/name=addons-975000 minikube.k8s.io/updated_at=2024_02_13T14_40_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:46.464596    1603 ops.go:34] apiserver oom_adj: -16
	I0213 14:40:46.520368    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:47.022451    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:47.522375    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:48.022446    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:48.522372    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:49.022421    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:49.522317    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:50.022362    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:50.522383    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:51.022310    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:51.522364    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:52.022315    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:52.522265    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:53.022239    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:53.522264    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:54.022285    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:54.522225    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:55.022205    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:55.522276    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:56.022173    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:56.522157    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:57.022217    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:57.522127    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:58.022156    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:58.522103    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:59.021320    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:59.522136    1603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:40:59.554234    1603 kubeadm.go:1088] duration metric: took 13.097200542s to wait for elevateKubeSystemPrivileges.
	I0213 14:40:59.554251    1603 kubeadm.go:406] StartCluster complete in 19.793312709s
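The burst of `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: the service account controller creates the default ServiceAccount asynchronously after the API server comes up, and the cluster-admin binding can only proceed once it exists. A standalone equivalent of that polling loop (sketch):

	until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done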
	I0213 14:40:59.554260    1603 settings.go:142] acquiring lock: {Name:mkdd6397441cfaf6d06a74b65d6ddefdb863237c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:59.554412    1603 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:40:59.554629    1603 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/kubeconfig: {Name:mkf66d96abab1e512e6f2721c341e70e5b11c9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:59.554854    1603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 14:40:59.554982    1603 config.go:182] Loaded profile config "addons-975000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 14:40:59.554989    1603 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0213 14:40:59.555039    1603 addons.go:69] Setting yakd=true in profile "addons-975000"
	I0213 14:40:59.555047    1603 addons.go:234] Setting addon yakd=true in "addons-975000"
	I0213 14:40:59.555050    1603 addons.go:69] Setting inspektor-gadget=true in profile "addons-975000"
	I0213 14:40:59.555054    1603 addons.go:234] Setting addon inspektor-gadget=true in "addons-975000"
	I0213 14:40:59.555081    1603 addons.go:69] Setting registry=true in profile "addons-975000"
	I0213 14:40:59.555094    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:40:59.555097    1603 addons.go:69] Setting cloud-spanner=true in profile "addons-975000"
	I0213 14:40:59.555104    1603 addons.go:234] Setting addon cloud-spanner=true in "addons-975000"
	I0213 14:40:59.555105    1603 addons.go:234] Setting addon registry=true in "addons-975000"
	I0213 14:40:59.555121    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:40:59.555160    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:40:59.555148    1603 addons.go:69] Setting metrics-server=true in profile "addons-975000"
	I0213 14:40:59.555170    1603 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-975000"
	I0213 14:40:59.555206    1603 addons.go:234] Setting addon metrics-server=true in "addons-975000"
	I0213 14:40:59.555213    1603 addons.go:69] Setting storage-provisioner=true in profile "addons-975000"
	I0213 14:40:59.555216    1603 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-975000"
	I0213 14:40:59.555223    1603 addons.go:234] Setting addon storage-provisioner=true in "addons-975000"
	I0213 14:40:59.555271    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:40:59.555276    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:40:59.555282    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:40:59.555383    1603 addons.go:69] Setting ingress=true in profile "addons-975000"
	I0213 14:40:59.555094    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:40:59.555160    1603 addons.go:69] Setting gcp-auth=true in profile "addons-975000"
	I0213 14:40:59.555406    1603 mustload.go:65] Loading cluster: addons-975000
	I0213 14:40:59.555534    1603 retry.go:31] will retry after 867.764426ms: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor: connect: connection refused
	I0213 14:40:59.555387    1603 addons.go:234] Setting addon ingress=true in "addons-975000"
	I0213 14:40:59.555587    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:40:59.555598    1603 config.go:182] Loaded profile config "addons-975000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 14:40:59.555607    1603 retry.go:31] will retry after 1.177759369s: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor: connect: connection refused
	I0213 14:40:59.555137    1603 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-975000"
	I0213 14:40:59.555619    1603 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-975000"
	I0213 14:40:59.555746    1603 retry.go:31] will retry after 1.345452146s: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor: connect: connection refused
	I0213 14:40:59.555367    1603 addons.go:69] Setting volumesnapshots=true in profile "addons-975000"
	I0213 14:40:59.555752    1603 addons.go:234] Setting addon volumesnapshots=true in "addons-975000"
	I0213 14:40:59.555765    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:40:59.555788    1603 retry.go:31] will retry after 588.765185ms: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor: connect: connection refused
	I0213 14:40:59.555804    1603 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-975000"
	I0213 14:40:59.555807    1603 addons.go:69] Setting ingress-dns=true in profile "addons-975000"
	I0213 14:40:59.555812    1603 addons.go:234] Setting addon ingress-dns=true in "addons-975000"
	I0213 14:40:59.555815    1603 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-975000"
	I0213 14:40:59.555823    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:40:59.555829    1603 addons.go:69] Setting default-storageclass=true in profile "addons-975000"
	I0213 14:40:59.555834    1603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-975000"
	I0213 14:40:59.555856    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:40:59.555928    1603 retry.go:31] will retry after 762.079696ms: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor: connect: connection refused
	I0213 14:40:59.555914    1603 retry.go:31] will retry after 600.104264ms: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor: connect: connection refused
	I0213 14:40:59.555603    1603 retry.go:31] will retry after 896.742747ms: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor: connect: connection refused
	I0213 14:40:59.555967    1603 retry.go:31] will retry after 739.522584ms: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor: connect: connection refused
	I0213 14:40:59.555973    1603 retry.go:31] will retry after 1.48058599s: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor: connect: connection refused
	I0213 14:40:59.555804    1603 retry.go:31] will retry after 643.542262ms: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor: connect: connection refused
	I0213 14:40:59.556039    1603 retry.go:31] will retry after 1.24440146s: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor: connect: connection refused
	I0213 14:40:59.556187    1603 retry.go:31] will retry after 1.347427045s: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/monitor: connect: connection refused
	I0213 14:40:59.560781    1603 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0213 14:40:59.570814    1603 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0213 14:40:59.566697    1603 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0213 14:40:59.575728    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0213 14:40:59.575741    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:40:59.575787    1603 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0213 14:40:59.575792    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0213 14:40:59.575796    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:40:59.595863    1603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
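That sed pipeline edits the CoreDNS Corefile in place: it injects a hosts block mapping host.minikube.internal to the host machine at 192.168.105.1 (with fallthrough for all other names) and enables the log plugin. The edited Corefile can be inspected after the replace (sketch):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'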
	I0213 14:40:59.680998    1603 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0213 14:40:59.681011    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0213 14:40:59.681019    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0213 14:40:59.696338    1603 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0213 14:40:59.696349    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0213 14:40:59.744600    1603 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0213 14:40:59.744611    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0213 14:40:59.763072    1603 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0213 14:40:59.763080    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0213 14:40:59.775736    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0213 14:41:00.060086    1603 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-975000" context rescaled to 1 replicas
	I0213 14:41:00.060108    1603 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 14:41:00.066727    1603 out.go:177] * Verifying Kubernetes components...
	I0213 14:41:00.073606    1603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 14:41:00.132476    1603 start.go:929] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0213 14:41:00.151562    1603 out.go:177]   - Using image docker.io/registry:2.8.3
	I0213 14:41:00.155625    1603 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0213 14:41:00.159737    1603 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0213 14:41:00.159748    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0213 14:41:00.159759    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:00.164545    1603 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0213 14:41:00.174601    1603 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0213 14:41:00.183568    1603 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0213 14:41:00.190469    1603 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0213 14:41:00.199529    1603 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0213 14:41:00.207594    1603 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0213 14:41:00.211586    1603 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0213 14:41:00.221654    1603 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 14:41:00.231570    1603 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0213 14:41:00.237591    1603 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 14:41:00.246598    1603 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0213 14:41:00.243691    1603 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0213 14:41:00.252636    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0213 14:41:00.252651    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:00.252673    1603 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0213 14:41:00.252678    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0213 14:41:00.252683    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:00.269047    1603 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0213 14:41:00.269057    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0213 14:41:00.296501    1603 addons.go:234] Setting addon default-storageclass=true in "addons-975000"
	I0213 14:41:00.296524    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:41:00.297251    1603 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 14:41:00.297257    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 14:41:00.297263    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:00.316069    1603 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0213 14:41:00.316080    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0213 14:41:00.318944    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:41:00.349057    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0213 14:41:00.357701    1603 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0213 14:41:00.357712    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0213 14:41:00.362888    1603 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0213 14:41:00.362897    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0213 14:41:00.373810    1603 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0213 14:41:00.373822    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0213 14:41:00.376436    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0213 14:41:00.378311    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 14:41:00.430556    1603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 14:41:00.433585    1603 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 14:41:00.433593    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 14:41:00.433603    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:00.437495    1603 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0213 14:41:00.437507    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0213 14:41:00.450584    1603 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-975000 service yakd-dashboard -n yakd-dashboard
	
	I0213 14:41:00.444670    1603 node_ready.go:35] waiting up to 6m0s for node "addons-975000" to be "Ready" ...
	I0213 14:41:00.461609    1603 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0213 14:41:00.467499    1603 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0213 14:41:00.467507    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0213 14:41:00.467099    1603 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0213 14:41:00.467544    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0213 14:41:00.467547    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:00.469490    1603 node_ready.go:49] node "addons-975000" has status "Ready":"True"
	I0213 14:41:00.469514    1603 node_ready.go:38] duration metric: took 14.063417ms waiting for node "addons-975000" to be "Ready" ...
	I0213 14:41:00.469519    1603 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
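Each label listed there corresponds to a kubectl-level readiness wait; the kube-dns portion, for example, is roughly equivalent to (sketch):

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s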
	I0213 14:41:00.479960    1603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4th54" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:00.525986    1603 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0213 14:41:00.525997    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0213 14:41:00.649206    1603 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0213 14:41:00.649217    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0213 14:41:00.718307    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 14:41:00.741580    1603 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0213 14:41:00.744663    1603 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 14:41:00.744672    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 14:41:00.744684    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:00.783520    1603 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0213 14:41:00.783530    1603 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0213 14:41:00.783532    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0213 14:41:00.783535    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0213 14:41:00.834655    1603 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0213 14:41:00.824795    1603 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0213 14:41:00.838430    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0213 14:41:00.838486    1603 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0213 14:41:00.838492    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0213 14:41:00.838499    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:00.869342    1603 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 14:41:00.869352    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0213 14:41:00.902615    1603 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-975000"
	I0213 14:41:00.902635    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:41:00.907598    1603 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0213 14:41:00.910514    1603 out.go:177]   - Using image docker.io/busybox:stable
	I0213 14:41:00.914598    1603 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0213 14:41:00.914608    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0213 14:41:00.914618    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:00.915016    1603 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 14:41:00.915022    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 14:41:00.919590    1603 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0213 14:41:00.922621    1603 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0213 14:41:00.922629    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0213 14:41:00.922640    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:00.923060    1603 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0213 14:41:00.923067    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0213 14:41:00.923693    1603 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0213 14:41:00.923699    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0213 14:41:00.945987    1603 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0213 14:41:00.946000    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0213 14:41:00.950657    1603 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0213 14:41:00.950664    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0213 14:41:00.960653    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0213 14:41:00.962542    1603 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 14:41:00.962549    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 14:41:01.032365    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 14:41:01.041532    1603 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0213 14:41:01.047597    1603 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0213 14:41:01.047647    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0213 14:41:01.047660    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:01.074122    1603 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0213 14:41:01.074133    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0213 14:41:01.107314    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0213 14:41:01.126097    1603 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0213 14:41:01.126108    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0213 14:41:01.168207    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0213 14:41:01.188906    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0213 14:41:01.189475    1603 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0213 14:41:01.189482    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0213 14:41:01.211891    1603 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0213 14:41:01.211902    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0213 14:41:01.249039    1603 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0213 14:41:01.249051    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0213 14:41:01.257774    1603 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0213 14:41:01.257786    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0213 14:41:01.313372    1603 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0213 14:41:01.313384    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0213 14:41:01.326839    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0213 14:41:01.379823    1603 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 14:41:01.379835    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0213 14:41:01.419648    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 14:41:01.483842    1603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.134789417s)
	I0213 14:41:01.483860    1603 addons.go:470] Verifying addon registry=true in "addons-975000"
	I0213 14:41:01.491387    1603 out.go:177] * Verifying registry addon...
	I0213 14:41:01.498813    1603 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0213 14:41:01.504743    1603 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0213 14:41:01.504754    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
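The kapi poller above re-checks the labeled pods roughly every 500 ms until they all report Ready (compare the ~.00x/.50x timestamps in the poll lines that follow). Assuming kubectl is pointed at the addons-975000 cluster, a hand-rolled equivalent of this wait would be something like:

    # Block until the registry pods are Ready; the timeout here is illustrative.
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=registry --timeout=6m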
	I0213 14:41:01.983736    1603 pod_ready.go:97] error getting pod "coredns-5dd5756b68-4th54" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-4th54" not found
	I0213 14:41:01.983749    1603 pod_ready.go:81] duration metric: took 1.503813875s waiting for pod "coredns-5dd5756b68-4th54" in "kube-system" namespace to be "Ready" ...
	E0213 14:41:01.983755    1603 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-4th54" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-4th54" not found
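The "not found" error is benign and most likely the usual CoreDNS scale-down race: kubeadm creates two CoreDNS replicas and minikube scales the Deployment down to one, so the poller can race against the deletion of the second pod (coredns-5dd5756b68-4th54), skip it, and move on to the survivor. Assuming that explanation, a quick spot-check would be:

    # Expect "1" after minikube's CoreDNS scale-down.
    kubectl -n kube-system get deploy coredns -o jsonpath='{.spec.replicas}'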
	I0213 14:41:01.983759    1603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w58vd" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:01.987091    1603 pod_ready.go:92] pod "coredns-5dd5756b68-w58vd" in "kube-system" namespace has status "Ready":"True"
	I0213 14:41:01.987099    1603 pod_ready.go:81] duration metric: took 3.336791ms waiting for pod "coredns-5dd5756b68-w58vd" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:01.987103    1603 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-975000" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:01.989567    1603 pod_ready.go:92] pod "etcd-addons-975000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:41:01.989573    1603 pod_ready.go:81] duration metric: took 2.467292ms waiting for pod "etcd-addons-975000" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:01.989576    1603 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-975000" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:01.991867    1603 pod_ready.go:92] pod "kube-apiserver-addons-975000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:41:01.991876    1603 pod_ready.go:81] duration metric: took 2.295708ms waiting for pod "kube-apiserver-addons-975000" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:01.991882    1603 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-975000" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:01.998153    1603 pod_ready.go:92] pod "kube-controller-manager-addons-975000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:41:01.998162    1603 pod_ready.go:81] duration metric: took 6.277459ms waiting for pod "kube-controller-manager-addons-975000" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:01.998167    1603 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7bg7s" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:02.000977    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:02.262735    1603 pod_ready.go:92] pod "kube-proxy-7bg7s" in "kube-system" namespace has status "Ready":"True"
	I0213 14:41:02.262746    1603 pod_ready.go:81] duration metric: took 264.582584ms waiting for pod "kube-proxy-7bg7s" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:02.262752    1603 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-975000" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:02.501766    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:02.669042    1603 pod_ready.go:92] pod "kube-scheduler-addons-975000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:41:02.669054    1603 pod_ready.go:81] duration metric: took 406.307792ms waiting for pod "kube-scheduler-addons-975000" in "kube-system" namespace to be "Ready" ...
	I0213 14:41:02.669059    1603 pod_ready.go:38] duration metric: took 2.199586917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 14:41:02.669068    1603 api_server.go:52] waiting for apiserver process to appear ...
	I0213 14:41:02.669136    1603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
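pgrep -xnf makes this a strict liveness check: -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching PID. Run by hand inside the guest:

    # Exit status 0 (and a printed PID) only once a matching kube-apiserver process exists.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'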
	I0213 14:41:03.002170    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:03.199952    1603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.821693709s)
	I0213 14:41:03.200000    1603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.8236225s)
	I0213 14:41:03.200008    1603 addons.go:470] Verifying addon ingress=true in "addons-975000"
	I0213 14:41:03.200074    1603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.481816416s)
	I0213 14:41:03.205230    1603 out.go:177] * Verifying ingress addon...
	I0213 14:41:03.214579    1603 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0213 14:41:03.216327    1603 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0213 14:41:03.216333    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:03.502746    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:03.678249    1603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.717633875s)
	I0213 14:41:03.678255    1603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.645937541s)
	I0213 14:41:03.678270    1603 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-975000"
	I0213 14:41:03.678279    1603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.571016875s)
	I0213 14:41:03.678330    1603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.510169834s)
	I0213 14:41:03.678362    1603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.4895065s)
	I0213 14:41:03.678388    1603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.351593958s)
	I0213 14:41:03.678424    1603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.258821583s)
	I0213 14:41:03.678441    1603 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.009320792s)
	I0213 14:41:03.681107    1603 out.go:177] * Verifying csi-hostpath-driver addon...
	I0213 14:41:03.681129    1603 addons.go:470] Verifying addon metrics-server=true in "addons-975000"
	W0213 14:41:03.681228    1603 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0213 14:41:03.681240    1603 retry.go:31] will retry after 191.362671ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
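This failure is the classic CRD establishment race: the single kubectl apply both creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass object, and the new API group is not yet served when the custom resource is validated, hence "no matches for kind ... ensure CRDs are installed first". The harness handles it by retrying. A race-free alternative (sketched here; not what minikube does) is to apply the CRDs first and wait for them to be Established:

    # Apply the CRD, wait until the API server actually serves it, then apply the CR.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml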
	I0213 14:41:03.681261    1603 api_server.go:72] duration metric: took 3.62121225s to wait for apiserver process to appear ...
	I0213 14:41:03.688478    1603 api_server.go:88] waiting for apiserver healthz status ...
	I0213 14:41:03.688493    1603 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0213 14:41:03.688883    1603 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0213 14:41:03.696391    1603 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0213 14:41:03.696400    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:03.701591    1603 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0213 14:41:03.702390    1603 api_server.go:141] control plane version: v1.28.4
	I0213 14:41:03.702397    1603 api_server.go:131] duration metric: took 13.913917ms to wait for apiserver health ...
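The healthz probe is a plain GET against the apiserver. To reproduce it by hand, either through kubectl or directly against the endpoint from the log (the direct form relies on /healthz being anonymously readable, which kubeadm clusters normally allow):

    kubectl get --raw /healthz                    # prints "ok" when healthy
    curl -k https://192.168.105.2:8443/healthz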
	I0213 14:41:03.702401    1603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 14:41:03.712637    1603 system_pods.go:59] 17 kube-system pods found
	I0213 14:41:03.712651    1603 system_pods.go:61] "coredns-5dd5756b68-w58vd" [d0578c6e-5f09-4af3-87ec-fae95337930a] Running
	I0213 14:41:03.712656    1603 system_pods.go:61] "csi-hostpath-attacher-0" [28e012f4-3715-4433-9744-8d2f913e33fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0213 14:41:03.712659    1603 system_pods.go:61] "csi-hostpath-resizer-0" [ff7ff38e-e5b0-4cd3-8bcd-ac139d4e6ea1] Pending
	I0213 14:41:03.712662    1603 system_pods.go:61] "csi-hostpathplugin-mpzv4" [3d2cc4a1-9a77-49a2-9d05-aecf0d6ac7e4] Pending
	I0213 14:41:03.712664    1603 system_pods.go:61] "etcd-addons-975000" [c116e331-fdfb-4faf-a6e0-73498f7bde8f] Running
	I0213 14:41:03.712666    1603 system_pods.go:61] "kube-apiserver-addons-975000" [2d4f6998-5817-4df9-a5c7-6f16d8b2ad3c] Running
	I0213 14:41:03.712668    1603 system_pods.go:61] "kube-controller-manager-addons-975000" [67ba2cf6-0d48-4b89-b3e6-7f48c4fc628c] Running
	I0213 14:41:03.712671    1603 system_pods.go:61] "kube-ingress-dns-minikube" [549cdf17-10db-4eaf-a173-d54df289ac11] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0213 14:41:03.712674    1603 system_pods.go:61] "kube-proxy-7bg7s" [73c57b15-fa40-4230-9c2d-27d327b881a8] Running
	I0213 14:41:03.712676    1603 system_pods.go:61] "kube-scheduler-addons-975000" [538d4642-fffb-4fda-a5af-258edb685eb7] Running
	I0213 14:41:03.712679    1603 system_pods.go:61] "metrics-server-69cf46c98-jvnzm" [675f5d76-e24a-43d5-9312-34d0e1ece10e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 14:41:03.712682    1603 system_pods.go:61] "nvidia-device-plugin-daemonset-m85cq" [24ef4152-2e8a-4d81-8c22-3ada9c124d45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0213 14:41:03.712685    1603 system_pods.go:61] "registry-proxy-dcfkf" [bb9bcd40-a4eb-481d-8ed1-556b77cb39c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0213 14:41:03.712689    1603 system_pods.go:61] "registry-wqg2d" [81878e2a-d1ef-4326-86c8-ad5b59db464e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0213 14:41:03.712692    1603 system_pods.go:61] "snapshot-controller-58dbcc7b99-6gws9" [4a97ec5e-d461-4f28-a31b-f353023ae674] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 14:41:03.712695    1603 system_pods.go:61] "snapshot-controller-58dbcc7b99-98rmk" [a8142d2d-fdde-4ab9-bf57-9ad54601aa4b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 14:41:03.712698    1603 system_pods.go:61] "storage-provisioner" [d3f6bb0f-100b-4e7d-aeb1-b4b8f416381d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 14:41:03.712701    1603 system_pods.go:74] duration metric: took 10.296709ms to wait for pod list to return data ...
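Seventeen pods with the addon pods still Pending is expected this early; the k8s-apps check at 14:41:03.734 below passes with the same mix, so Pending addon pods are tolerated at this stage. A point-in-time view of the same list:

    kubectl -n kube-system get pods -o wide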
	I0213 14:41:03.712707    1603 default_sa.go:34] waiting for default service account to be created ...
	I0213 14:41:03.726686    1603 default_sa.go:45] found service account: "default"
	I0213 14:41:03.726699    1603 default_sa.go:55] duration metric: took 13.988292ms for default service account to be created ...
	I0213 14:41:03.726705    1603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 14:41:03.728011    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:03.734541    1603 system_pods.go:86] 17 kube-system pods found
	I0213 14:41:03.734559    1603 system_pods.go:89] "coredns-5dd5756b68-w58vd" [d0578c6e-5f09-4af3-87ec-fae95337930a] Running
	I0213 14:41:03.734564    1603 system_pods.go:89] "csi-hostpath-attacher-0" [28e012f4-3715-4433-9744-8d2f913e33fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0213 14:41:03.734568    1603 system_pods.go:89] "csi-hostpath-resizer-0" [ff7ff38e-e5b0-4cd3-8bcd-ac139d4e6ea1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0213 14:41:03.734572    1603 system_pods.go:89] "csi-hostpathplugin-mpzv4" [3d2cc4a1-9a77-49a2-9d05-aecf0d6ac7e4] Pending
	I0213 14:41:03.734574    1603 system_pods.go:89] "etcd-addons-975000" [c116e331-fdfb-4faf-a6e0-73498f7bde8f] Running
	I0213 14:41:03.734577    1603 system_pods.go:89] "kube-apiserver-addons-975000" [2d4f6998-5817-4df9-a5c7-6f16d8b2ad3c] Running
	I0213 14:41:03.734599    1603 system_pods.go:89] "kube-controller-manager-addons-975000" [67ba2cf6-0d48-4b89-b3e6-7f48c4fc628c] Running
	I0213 14:41:03.734604    1603 system_pods.go:89] "kube-ingress-dns-minikube" [549cdf17-10db-4eaf-a173-d54df289ac11] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0213 14:41:03.734608    1603 system_pods.go:89] "kube-proxy-7bg7s" [73c57b15-fa40-4230-9c2d-27d327b881a8] Running
	I0213 14:41:03.734612    1603 system_pods.go:89] "kube-scheduler-addons-975000" [538d4642-fffb-4fda-a5af-258edb685eb7] Running
	I0213 14:41:03.734620    1603 system_pods.go:89] "metrics-server-69cf46c98-jvnzm" [675f5d76-e24a-43d5-9312-34d0e1ece10e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 14:41:03.734624    1603 system_pods.go:89] "nvidia-device-plugin-daemonset-m85cq" [24ef4152-2e8a-4d81-8c22-3ada9c124d45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0213 14:41:03.734628    1603 system_pods.go:89] "registry-proxy-dcfkf" [bb9bcd40-a4eb-481d-8ed1-556b77cb39c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0213 14:41:03.734631    1603 system_pods.go:89] "registry-wqg2d" [81878e2a-d1ef-4326-86c8-ad5b59db464e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0213 14:41:03.734634    1603 system_pods.go:89] "snapshot-controller-58dbcc7b99-6gws9" [4a97ec5e-d461-4f28-a31b-f353023ae674] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 14:41:03.734639    1603 system_pods.go:89] "snapshot-controller-58dbcc7b99-98rmk" [a8142d2d-fdde-4ab9-bf57-9ad54601aa4b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 14:41:03.734641    1603 system_pods.go:89] "storage-provisioner" [d3f6bb0f-100b-4e7d-aeb1-b4b8f416381d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 14:41:03.734646    1603 system_pods.go:126] duration metric: took 7.938042ms to wait for k8s-apps to be running ...
	I0213 14:41:03.734657    1603 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 14:41:03.734711    1603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 14:41:03.743791    1603 system_svc.go:56] duration metric: took 9.129208ms WaitForService to wait for kubelet.
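systemctl is-active --quiet prints nothing and reports state purely via exit status (0 = active), which is why the runner only needs the return code:

    sudo systemctl is-active --quiet kubelet && echo kubelet running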
	I0213 14:41:03.743804    1603 kubeadm.go:581] duration metric: took 3.68377225s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 14:41:03.743817    1603 node_conditions.go:102] verifying NodePressure condition ...
	I0213 14:41:03.747261    1603 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0213 14:41:03.747271    1603 node_conditions.go:123] node cpu capacity is 2
	I0213 14:41:03.747277    1603 node_conditions.go:105] duration metric: took 3.457875ms to run NodePressure ...
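The ephemeral-storage and CPU figures above come straight from the node's reported capacity; the same numbers (17784760Ki, 2) can be read with:

    kubectl get node addons-975000 -o jsonpath='{.status.capacity}'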
	I0213 14:41:03.747283    1603 start.go:228] waiting for startup goroutines ...
	I0213 14:41:03.874717    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
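Note the retry re-runs the same bundle with apply --force. The CRDs from the failed attempt already exist, so this pass should succeed regardless; as I understand client-side apply, --force additionally lets kubectl fall back to delete-and-recreate for any object the patch cannot update, though the log does not show which path was taken.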
	I0213 14:41:04.003409    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:04.194545    1603 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0213 14:41:04.194554    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:04.239571    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:04.502035    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:04.693725    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:04.718480    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:05.002679    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:05.193578    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:05.218515    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:05.503151    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:05.693564    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:05.718890    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:06.002679    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:06.193517    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:06.218328    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:06.503129    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:06.693438    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:06.718382    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:06.924977    1603 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0213 14:41:06.924993    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:06.958371    1603 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0213 14:41:06.963081    1603 addons.go:234] Setting addon gcp-auth=true in "addons-975000"
	I0213 14:41:06.963101    1603 host.go:66] Checking if "addons-975000" exists ...
	I0213 14:41:06.963956    1603 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0213 14:41:06.963964    1603 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/addons-975000/id_rsa Username:docker}
	I0213 14:41:06.998292    1603 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 14:41:07.001312    1603 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0213 14:41:07.002287    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:07.004324    1603 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0213 14:41:07.004329    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0213 14:41:07.011572    1603 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0213 14:41:07.011579    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0213 14:41:07.016605    1603 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0213 14:41:07.016611    1603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0213 14:41:07.021647    1603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
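The gcp-auth addon stands up a mutating webhook (note the kube-webhook-certgen image above) that injects the copied credentials into pods. Once the apply lands, verification again reduces to a label wait, roughly equivalent to:

    # Timeout illustrative; matches the kapi polling below.
    kubectl -n gcp-auth wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=gcp-auth --timeout=3m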
	I0213 14:41:07.193748    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:07.218975    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:07.305708    1603 addons.go:470] Verifying addon gcp-auth=true in "addons-975000"
	I0213 14:41:07.309397    1603 out.go:177] * Verifying gcp-auth addon...
	I0213 14:41:07.319633    1603 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0213 14:41:07.323912    1603 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0213 14:41:07.323920    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:07.502439    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:07.693640    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:07.718335    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:07.823188    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:08.002811    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:08.193684    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:08.218675    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:08.322134    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:08.502667    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:08.693368    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:08.718411    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:08.823115    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:09.002759    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:09.193246    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:09.218644    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:09.323200    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:09.503294    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:09.693284    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:09.718524    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:09.823772    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:10.002782    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:10.194215    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:10.218359    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:10.323256    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:10.502952    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:10.693330    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:10.718396    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:10.822322    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:11.002754    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:11.193273    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:11.218125    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:11.323042    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:11.502906    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:11.693136    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:11.718253    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:11.823196    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:12.002875    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:12.193321    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:12.218153    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:12.323337    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:12.502826    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:12.692345    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:12.718430    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:12.823125    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:13.002467    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:13.193163    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:13.218381    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:13.321936    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:13.503346    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:13.693105    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:13.716283    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:13.823132    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:14.002604    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:14.193857    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:14.218263    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:14.323674    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:14.503651    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:14.693268    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:14.718080    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:14.822533    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:15.002691    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:15.193319    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:15.218281    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:15.322803    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:15.502647    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:15.693475    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:15.716759    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:15.823031    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:16.003058    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:16.192181    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:16.218575    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:16.436434    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:16.502949    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:16.693394    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:16.718061    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:16.822905    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:17.002573    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:17.193382    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:17.218362    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:17.322871    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:17.502619    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:17.695305    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:17.719112    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:17.823199    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:18.002972    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:18.193571    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:18.218450    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:18.322257    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:18.503663    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:18.693124    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:18.718091    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:18.823958    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:19.003680    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:19.193325    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:19.218125    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:19.323228    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:19.502852    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:19.693920    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:19.718119    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:19.822965    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:20.002782    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:20.194069    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:20.221368    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:20.323201    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:20.502652    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:20.693400    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:20.717979    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:20.822966    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:21.002382    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:21.193268    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:21.218009    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:21.323253    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:21.502031    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:21.693237    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:21.717966    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:21.823038    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:22.002651    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:22.195097    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:22.218270    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:22.323242    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:22.502582    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:22.693906    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:22.718168    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:22.822819    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:23.001807    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:23.193212    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:23.217907    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:23.320792    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:23.502987    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:23.693086    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:23.717941    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:23.822752    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:24.002427    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:24.192872    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:24.217902    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:24.323007    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:24.502308    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:24.693203    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:24.718496    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:24.822864    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:25.002106    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:25.193385    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:25.217966    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:25.322765    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:25.502262    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:25.692790    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:25.717784    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:25.822965    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:26.002700    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:26.193385    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:26.218245    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:26.323240    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:26.502805    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:26.693277    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:26.717847    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:26.822618    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:27.002381    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:27.193251    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:27.218643    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:27.323336    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:27.502709    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:27.693564    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:27.718612    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:27.823059    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:28.003122    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:28.194048    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:28.218215    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:28.321455    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:28.502300    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:28.692994    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:28.718003    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:28.822775    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:29.002415    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:29.193148    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:29.218321    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:29.322897    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:29.502230    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:29.692397    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:29.718000    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:29.821644    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:30.002590    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:30.193250    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:30.217866    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:30.323028    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:30.502570    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:30.694526    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:30.718184    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:30.823198    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:31.002292    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:31.192911    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:31.217916    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:31.322866    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:31.502874    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 14:41:31.693118    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:31.718597    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:31.821530    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:32.002382    1603 kapi.go:107] duration metric: took 30.504304542s to wait for kubernetes.io/minikube-addons=registry ...
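
The registry selector became ready here after ~30.5s while the other three selectors keep polling below. For reference, a loop like the one logged above can be approximated with client-go. This is a minimal, hypothetical sketch for illustration, not minikube's actual kapi.go implementation; the kube-system namespace, the 500ms interval (inferred from the ~500ms cadence of the timestamps), and the 6-minute timeout are assumptions. Only the selector string is taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls pods matching selector until all of them report
// Running, printing one "waiting for pod" line per tick, similar to
// the kapi.go:96 lines in the log above.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			pending := false
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					pending = true
					break
				}
			}
			if !pending {
				// Mirrors the "duration metric: took ... to wait for ..." line (kapi.go:107).
				fmt.Printf("took %s to wait for %s ...\n", time.Since(start), selector)
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, selector)
}

func main() {
	// Load the local kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(context.Background(), cs,
		"kube-system", "kubernetes.io/minikube-addons=registry",
		500*time.Millisecond, 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
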
	I0213 14:41:32.192903    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:32.217942    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:32.323389    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:32.693176    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:32.717889    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:32.822742    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:33.193014    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:33.217973    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:33.321607    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:33.693679    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:33.718285    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:33.822674    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:34.191533    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:34.217769    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:34.322639    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:34.692693    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:34.717651    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:34.822668    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:35.193216    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:35.217561    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:35.320844    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:35.692741    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:35.715858    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:35.822409    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:36.193348    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:36.217576    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:36.323030    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:36.692866    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:36.717607    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:36.822275    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:37.192739    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:37.217942    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:37.322306    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:37.693243    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:37.717714    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:37.822538    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:38.192985    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:38.217678    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:38.322019    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:38.692703    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:38.716372    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:38.822515    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:39.192415    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:39.217870    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:39.322630    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:39.692830    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:39.717520    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:39.822432    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:40.192979    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:40.217635    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:40.322850    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:40.692791    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:40.717407    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:40.822601    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:41.192875    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:41.217475    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:41.322765    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:41.693119    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:41.718198    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:41.822474    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:42.192830    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:42.217683    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:42.322524    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:42.692779    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:42.717658    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:42.822514    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:43.192826    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:43.217823    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:43.321426    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:43.692757    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:43.717609    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:43.822583    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:44.192541    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:44.217583    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:44.322481    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:44.693034    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:44.717995    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:44.822638    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:45.192496    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:45.217620    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:45.322371    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:45.692506    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:45.717912    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:45.822488    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:46.192947    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:46.217414    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:46.322405    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:46.692750    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:46.717203    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:46.822189    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:47.192589    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:47.217493    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:47.322117    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:47.692436    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:47.717667    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:47.821819    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:48.192750    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:48.217639    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:48.322064    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:48.692848    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:48.717859    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:48.822542    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:49.192624    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:49.216316    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:49.322733    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:49.691704    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:49.719170    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:49.822269    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:50.192968    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:50.217340    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:50.322646    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:50.692621    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:50.717225    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:50.822274    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:51.193079    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:51.217408    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:51.321898    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:51.694277    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:51.717664    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:51.822264    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:52.192727    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:52.217546    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:52.322462    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:52.693918    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:52.717487    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:52.822265    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:53.191560    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:53.217298    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:53.321429    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:53.692556    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:53.717825    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:53.822374    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:54.192285    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:54.217322    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:54.322287    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:54.692336    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:54.717997    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:54.822261    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:55.193949    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:55.217449    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:55.322768    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:55.693545    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:55.716663    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:55.822854    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:56.192632    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:56.217249    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:56.322489    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:56.692490    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:56.719119    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:56.822130    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:57.192400    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:57.217993    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:57.322330    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:57.692521    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:57.716475    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:57.975657    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:58.193052    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:58.217848    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:58.321245    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:58.692734    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:58.717310    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:58.822219    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:59.192556    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:59.217096    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:59.321859    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:41:59.692589    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:41:59.717239    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:41:59.821977    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:00.192695    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:00.217084    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:00.322121    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:00.692652    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:00.716940    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:00.822118    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:01.193865    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:01.217784    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:01.322285    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:01.692534    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:01.717188    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:01.822827    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:02.192450    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:02.217218    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:02.322006    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:02.692314    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:02.717012    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:02.821942    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:03.192480    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:03.217106    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:03.321013    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:03.692379    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:03.717013    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:03.822167    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:04.192187    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:04.217123    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:04.321927    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:04.692619    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:04.717099    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:04.822407    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:05.192491    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:05.217519    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:05.322149    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:05.692377    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:05.717561    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:05.822009    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:06.192519    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:06.216748    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:06.322150    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:06.692364    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:06.717023    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:06.822077    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:07.192147    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:07.217123    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:07.322111    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:07.692105    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:07.716867    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:07.821612    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:08.191855    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:08.216909    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:08.321119    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:08.692629    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:08.717003    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:08.821798    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:09.192216    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:09.216945    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:09.321709    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:09.692152    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:09.714998    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:09.821611    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:10.191734    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:10.216964    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:10.321103    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:10.692098    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:10.717100    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:10.821614    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:11.192189    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:11.216741    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:11.321756    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:11.691793    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:11.716967    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:11.822006    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:12.191970    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:12.216789    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:12.321673    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:12.692118    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:12.717139    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:12.821747    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:13.192169    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:13.217200    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:13.320719    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:13.692310    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:13.717011    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:13.821728    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:14.192299    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:14.216875    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:14.323056    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:14.692606    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:14.716885    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:14.821443    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:15.191931    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:15.215503    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:15.321545    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:15.691564    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:15.716853    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:15.821833    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:16.191891    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:16.216895    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:16.321800    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:16.691926    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:16.716743    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:16.821425    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:17.192183    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:17.216669    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:17.321422    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:17.691749    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:17.716644    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:17.821522    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:18.191960    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:18.216947    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:18.320261    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:18.692550    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:18.716739    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:18.819848    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:19.192461    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:19.216702    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:19.321667    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:19.692343    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:19.716601    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:19.821384    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:20.191498    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:20.217295    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:20.321618    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:20.691834    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:20.716793    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:20.821719    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:21.191882    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:21.216855    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:21.321805    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:21.691665    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:21.717184    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:21.821673    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:22.191970    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:22.216484    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:22.321684    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:22.691710    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:22.716659    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:22.821528    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:23.191607    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:23.216520    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:23.320656    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:23.692374    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:23.716706    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:23.821321    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:24.191975    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:24.216546    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:24.321720    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:24.692104    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:24.716709    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:24.821391    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:25.191690    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:25.216995    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:25.321981    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:25.691826    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:25.715131    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:25.821803    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:26.191880    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:26.216798    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:26.321621    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:26.691406    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:26.716395    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:26.821274    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:27.191725    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:27.216425    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:27.321175    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:27.691486    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:27.716397    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:27.821173    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:28.191814    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:28.216533    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:28.320612    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:28.691739    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:28.716452    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:28.821423    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:29.191372    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:29.216757    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:29.321698    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:29.691623    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:29.716280    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:29.821175    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:30.191808    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:30.216789    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:30.321572    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:30.691532    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:30.716341    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:30.821552    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:31.191794    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:31.216411    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:31.320917    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:31.692135    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:31.716418    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:31.821527    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:32.191609    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:32.216181    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:32.321372    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:32.691333    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:32.716110    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:32.820978    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:33.191887    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:33.216155    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:33.320036    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:33.691526    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:33.716197    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:33.821392    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:34.191780    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:34.216375    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:34.320712    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:34.692049    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:34.716498    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:34.820994    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:35.193592    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:35.216280    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:35.320982    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:35.690078    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:35.714809    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:35.820861    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:36.191638    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:36.216548    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:36.321281    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:36.693177    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:36.716319    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:36.820891    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:37.191331    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:37.217007    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:37.320999    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same three kapi.go:96 polls (csi-hostpath-driver, ingress-nginx, gcp-auth, all Pending) repeat roughly twice per second from 14:42:37 through 14:42:44; identical lines elided ...]
	I0213 14:42:44.191516    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 14:42:44.216041    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:44.321011    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:42:44.691930    1603 kapi.go:107] duration metric: took 1m41.005475125s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0213 14:42:44.716127    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:42:44.820739    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the two remaining kapi.go:96 polls (ingress-nginx and gcp-auth, both Pending) repeat roughly twice per second from 14:42:45 through 14:43:28; identical lines elided ...]
	I0213 14:43:28.214252    1603 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 14:43:28.374487    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:43:28.715429    1603 kapi.go:107] duration metric: took 2m25.504350375s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0213 14:43:28.820005    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the gcp-auth kapi.go:96 poll repeats twice per second through 14:43:34; identical lines elided ...]
	I0213 14:43:34.320211    1603 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 14:43:34.819890    1603 kapi.go:107] duration metric: took 2m27.503809917s to wait for kubernetes.io/minikube-addons=gcp-auth ...
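The kapi.go:96 / kapi.go:107 pairs above are minikube's addon waiter: for each addon it polls the matching pods by label selector roughly twice per second until they leave Pending, then logs the total wait as a duration metric (1m41s for csi-hostpath-driver, 2m25s for ingress-nginx, 2m27s for gcp-auth). A minimal sketch of that polling pattern, assuming client-go as the dependency; this illustrates the loop the log reflects, not minikube's actual kapi code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching selector until all of them are
// Running, printing one line per poll like the kapi.go:96 entries above.
func waitForPodsRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	deadline := time.After(timeout)
	tick := time.NewTicker(500 * time.Millisecond) // the log shows ~2 polls per second
	defer tick.Stop()
	for {
		select {
		case <-deadline:
			return fmt.Errorf("timed out waiting for %s", selector)
		case <-tick.C:
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				continue // transient API errors and empty lists just keep polling
			}
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					break
				}
			}
			if ready {
				// the kapi.go:107 equivalent: report the total wait
				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitForPodsRunning(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute)
	if err != nil {
		panic(err)
	}
}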
	I0213 14:43:34.823747    1603 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-975000 cluster.
	I0213 14:43:34.828706    1603 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0213 14:43:34.832717    1603 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0213 14:43:34.837782    1603 out.go:177] * Enabled addons: cloud-spanner, yakd, storage-provisioner, default-storageclass, ingress-dns, nvidia-device-plugin, metrics-server, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0213 14:43:34.841655    1603 addons.go:505] enable addons completed in 2m35.290427917s: enabled=[cloud-spanner yakd storage-provisioner default-storageclass ingress-dns nvidia-device-plugin metrics-server inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0213 14:43:34.841669    1603 start.go:233] waiting for cluster config update ...
	I0213 14:43:34.841677    1603 start.go:242] writing updated cluster config ...
	I0213 14:43:34.842050    1603 ssh_runner.go:195] Run: rm -f paused
	I0213 14:43:34.974382    1603 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 14:43:34.978801    1603 out.go:177] * Done! kubectl is now configured to use "addons-975000" cluster and "default" namespace by default
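The three gcp-auth hints printed after the wait describe the webhook's opt-out: pods carrying the gcp-auth-skip-secret label are left unmutated. A hypothetical manifest built with the client-go API types and rendered as YAML; the label key comes from the message above, while the pod name, image, and the "true" value are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical pod name
			Labels: map[string]string{
				// the key the gcp-auth webhook looks for; "true" is an assumed value
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // apply this manifest and gcp-auth skips the pod
}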
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-02-13 22:40:28 UTC, ends at Tue 2024-02-13 22:44:42 UTC. --
	Feb 13 22:44:41 addons-975000 dockerd[1039]: time="2024-02-13T22:44:41.664073242Z" level=info msg="ignoring event" container=f33a63858d99b753bb4560d46d40af119881aa06a8eb44d935c107853d2e8973 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 22:44:41 addons-975000 dockerd[1045]: time="2024-02-13T22:44:41.664196574Z" level=info msg="shim disconnected" id=f33a63858d99b753bb4560d46d40af119881aa06a8eb44d935c107853d2e8973 namespace=moby
	Feb 13 22:44:41 addons-975000 dockerd[1045]: time="2024-02-13T22:44:41.664220699Z" level=warning msg="cleaning up after shim disconnected" id=f33a63858d99b753bb4560d46d40af119881aa06a8eb44d935c107853d2e8973 namespace=moby
	Feb 13 22:44:41 addons-975000 dockerd[1045]: time="2024-02-13T22:44:41.664224949Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.402768151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.402800942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.402809942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.402816234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.453652501Z" level=info msg="shim disconnected" id=b4c238f1c2eb60c8bb72e39837b75ebc0c7e5b27ed8853d86edc2a68fa4b5852 namespace=moby
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.453686459Z" level=warning msg="cleaning up after shim disconnected" id=b4c238f1c2eb60c8bb72e39837b75ebc0c7e5b27ed8853d86edc2a68fa4b5852 namespace=moby
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.453691042Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 13 22:44:42 addons-975000 dockerd[1039]: time="2024-02-13T22:44:42.454217371Z" level=info msg="ignoring event" container=b4c238f1c2eb60c8bb72e39837b75ebc0c7e5b27ed8853d86edc2a68fa4b5852 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.660401706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.660595080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.660608830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.660613955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:44:42 addons-975000 cri-dockerd[931]: time="2024-02-13T22:44:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6c9a3afb5700e3a5b65727609727d93739bea903b67d68002f8832f1e842775e/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.845871305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.845901388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.845910221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.845922471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.887399986Z" level=info msg="shim disconnected" id=084023b764b042a97b6d29ce38de79cd85f2f79d139da4d7f2f6caa6d4b8f2a9 namespace=moby
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.887545735Z" level=warning msg="cleaning up after shim disconnected" id=084023b764b042a97b6d29ce38de79cd85f2f79d139da4d7f2f6caa6d4b8f2a9 namespace=moby
	Feb 13 22:44:42 addons-975000 dockerd[1045]: time="2024-02-13T22:44:42.887620859Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 13 22:44:42 addons-975000 dockerd[1039]: time="2024-02-13T22:44:42.887860357Z" level=info msg="ignoring event" container=084023b764b042a97b6d29ce38de79cd85f2f79d139da4d7f2f6caa6d4b8f2a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	084023b764b04       fc9db2894f4e4                                                                                                                1 second ago         Exited              helper-pod                 0                   6c9a3afb5700e       helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89
	b4c238f1c2eb6       dd1b12fcb6097                                                                                                                1 second ago         Exited              hello-world-app            2                   e81d3e742951a       hello-world-app-5d77478584-x5cfg
	e2f4c8fee6ef4       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              8 seconds ago        Exited              helper-pod                 0                   ad1c4eef067d5       helper-pod-create-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89
	aacfe28f8e650       nginx@sha256:f2802c2a9d09c7aa3ace27445dfc5656ff24355da28e7b958074a0111e3fc076                                                31 seconds ago       Running             nginx                      0                   481c861457d05       nginx
	fd33256d5d7bf       gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b                          52 seconds ago       Exited              registry-test              0                   d8c1a53ba8e3b       registry-test
	491b9bfdeadee       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                   0                   9d6261cead58d       gcp-auth-d4c87556c-9qm2t
	7f68217663ef1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   2 minutes ago        Exited              patch                      0                   5274f0528623e       ingress-nginx-admission-patch-tm4h4
	7ce8622d2e1f0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   2 minutes ago        Exited              create                     0                   9b7f5ac55aa79       ingress-nginx-admission-create-thx8t
	b43a537a53ecd       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       2 minutes ago        Running             local-path-provisioner     0                   194b4e42a2dc7       local-path-provisioner-78b46b4d5c-hvhzw
	83ec6e4346320       nvcr.io/nvidia/k8s-device-plugin@sha256:339be23400f58c04f09b6ba1d4d2e0e7120648f2b114880513685b22093311f1                     3 minutes ago        Running             nvidia-device-plugin-ctr   0                   c944f54ba9e77       nvidia-device-plugin-daemonset-m85cq
	77acd76109565       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        3 minutes ago        Running             yakd                       0                   837c71a4b783b       yakd-dashboard-9947fc6bf-cgk6p
	c85fc6b4a0b8d       gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49               3 minutes ago        Running             cloud-spanner-emulator     0                   dfd603e91e7eb       cloud-spanner-emulator-64c8c85f65-6pftl
	b6a885cf08698       ba04bb24b9575                                                                                                                3 minutes ago        Running             storage-provisioner        0                   a313d3f6b4e76       storage-provisioner
	958452a9f950e       97e04611ad434                                                                                                                3 minutes ago        Running             coredns                    0                   81040c79b2da4       coredns-5dd5756b68-w58vd
	9e39cf14fa620       3ca3ca488cf13                                                                                                                3 minutes ago        Running             kube-proxy                 0                   bf2855463417d       kube-proxy-7bg7s
	694357bbbef71       05c284c929889                                                                                                                4 minutes ago        Running             kube-scheduler             0                   67e871a5ca6d5       kube-scheduler-addons-975000
	dd29cf073b110       9cdd6470f48c8                                                                                                                4 minutes ago        Running             etcd                       0                   b31213206c52a       etcd-addons-975000
	423f538822427       9961cbceaf234                                                                                                                4 minutes ago        Running             kube-controller-manager    0                   0a75e56d692e9       kube-controller-manager-addons-975000
	4c9ab0a949efe       04b4c447bb9d4                                                                                                                4 minutes ago        Running             kube-apiserver             0                   b32994ffe0fe9       kube-apiserver-addons-975000
	
	
	==> coredns [958452a9f950] <==
	[INFO] 10.244.0.19:51803 - 9678 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000018124s
	[INFO] 10.244.0.19:59381 - 11839 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002525s
	[INFO] 10.244.0.19:51803 - 24182 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013624s
	[INFO] 10.244.0.19:59381 - 65281 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000027666s
	[INFO] 10.244.0.19:51803 - 26752 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009916s
	[INFO] 10.244.0.19:59381 - 4937 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000024375s
	[INFO] 10.244.0.19:59381 - 2248 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027708s
	[INFO] 10.244.0.19:51803 - 54798 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011167s
	[INFO] 10.244.0.19:51803 - 28454 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000105s
	[INFO] 10.244.0.19:59381 - 62717 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000034499s
	[INFO] 10.244.0.19:51803 - 18307 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000011208s
	[INFO] 10.244.0.19:56386 - 25890 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000049207s
	[INFO] 10.244.0.19:56220 - 46571 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000084833s
	[INFO] 10.244.0.19:56220 - 1276 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000017292s
	[INFO] 10.244.0.19:56220 - 41706 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015042s
	[INFO] 10.244.0.19:56220 - 46592 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000014166s
	[INFO] 10.244.0.19:56220 - 55686 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013667s
	[INFO] 10.244.0.19:56220 - 8043 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000016s
	[INFO] 10.244.0.19:56220 - 18114 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000329913s
	[INFO] 10.244.0.19:56386 - 26991 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000023041s
	[INFO] 10.244.0.19:56386 - 61122 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000018084s
	[INFO] 10.244.0.19:56386 - 22292 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002925s
	[INFO] 10.244.0.19:56386 - 12758 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000015958s
	[INFO] 10.244.0.19:56386 - 10811 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000015875s
	[INFO] 10.244.0.19:56386 - 36290 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000037041s
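The NXDOMAIN/NOERROR runs above are the pod resolver's search-list expansion: with options ndots:5 (see the resolv.conf rewrite in the Docker section), hello-world-app.default.svc.cluster.local has only four dots, so each search domain is appended and answered NXDOMAIN before the absolute name finally returns NOERROR. A small sketch of that candidate generation, assuming glibc-style search semantics and the search list implied by the queries (the client here is a pod in the ingress-nginx namespace):

package main

import (
	"fmt"
	"strings"
)

// expand reproduces resolv.conf search handling: a name with fewer than
// ndots dots tries each search suffix before the name itself.
func expand(name string, ndots int, search []string) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s)
		}
	}
	return append(out, name) // the absolute name is tried last
}

func main() {
	search := []string{"ingress-nginx.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range expand("hello-world-app.default.svc.cluster.local", 5, search) {
		fmt.Println(q) // first three are the NXDOMAIN queries above; the last is the NOERROR hit
	}
}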
	
	
	==> describe nodes <==
	Name:               addons-975000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-975000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb52fe04bc8b044b129ef2ff27607d20a9fceb93
	                    minikube.k8s.io/name=addons-975000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T14_40_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-975000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 22:40:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-975000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 22:44:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 22:44:20 +0000   Tue, 13 Feb 2024 22:40:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 22:44:20 +0000   Tue, 13 Feb 2024 22:40:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 22:44:20 +0000   Tue, 13 Feb 2024 22:40:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 22:44:20 +0000   Tue, 13 Feb 2024 22:40:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-975000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904700Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904700Ki
	  pods:               110
	System Info:
	  Machine ID:                 02433e8609dc4ad586012d91ab05139f
	  System UUID:                02433e8609dc4ad586012d91ab05139f
	  Boot ID:                    7f0787be-628c-4885-950f-2392bc9937b4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-6pftl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  default                     hello-world-app-5d77478584-x5cfg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  gcp-auth                    gcp-auth-d4c87556c-9qm2t                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 coredns-5dd5756b68-w58vd                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m44s
	  kube-system                 etcd-addons-975000                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m58s
	  kube-system                 kube-apiserver-addons-975000                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-addons-975000                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-7bg7s                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-addons-975000                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 nvidia-device-plugin-daemonset-m85cq                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  local-path-storage          helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-78b46b4d5c-hvhzw                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-cgk6p                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m43s  kube-proxy       
	  Normal  Starting                 3m58s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m58s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m58s  kubelet          Node addons-975000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s  kubelet          Node addons-975000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s  kubelet          Node addons-975000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m55s  kubelet          Node addons-975000 status is now: NodeReady
	  Normal  RegisteredNode           3m45s  node-controller  Node addons-975000 event: Registered Node addons-975000 in Controller
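For reference, the percentages kubectl prints under Allocated resources are each total divided by the node's Allocatable values, truncated to a whole percent: cpu requests 750m / 2000m = 37.5%, shown as 37%; memory requests 298Mi / 3904700Ki (about 3813Mi) is roughly 7.8%, shown as 7%; memory limits 426Mi / 3813Mi is roughly 11.2%, shown as 11%.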
	
	
	==> dmesg <==
	[  +0.390514] kauditd_printk_skb: 43 callbacks suppressed
	[  +0.034391] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.183223] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +0.078698] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[  +0.088652] systemd-fstab-generator[724]: Ignoring "noauto" for root device
	[  +1.260373] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.079958] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +0.077532] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[  +0.100987] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +2.576734] systemd-fstab-generator[1032]: Ignoring "noauto" for root device
	[  +2.018182] kauditd_printk_skb: 137 callbacks suppressed
	[  +2.524869] systemd-fstab-generator[1402]: Ignoring "noauto" for root device
	[  +4.591114] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.035694] systemd-fstab-generator[2252]: Ignoring "noauto" for root device
	[Feb13 22:41] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.908064] kauditd_printk_skb: 112 callbacks suppressed
	[ +10.202732] kauditd_printk_skb: 4 callbacks suppressed
	[Feb13 22:42] kauditd_printk_skb: 4 callbacks suppressed
	[Feb13 22:43] kauditd_printk_skb: 8 callbacks suppressed
	[ +29.859877] kauditd_printk_skb: 10 callbacks suppressed
	[Feb13 22:44] kauditd_printk_skb: 10 callbacks suppressed
	[  +8.633895] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.663438] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.396680] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.649648] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [dd29cf073b11] <==
	{"level":"info","ts":"2024-02-13T22:40:42.402781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-13T22:40:42.402847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-13T22:40:42.402864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-02-13T22:40:42.402884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-02-13T22:40:42.402918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-02-13T22:40:42.402933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-02-13T22:40:42.402953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-02-13T22:40:42.403265Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T22:40:42.403552Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-975000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T22:40:42.403593Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T22:40:42.403685Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T22:40:42.403733Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T22:40:42.403774Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T22:40:42.404273Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-02-13T22:40:42.404313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T22:40:42.404647Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T22:40:42.410191Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T22:40:42.410222Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-02-13T22:41:16.51181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.759833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10572"}
	{"level":"info","ts":"2024-02-13T22:41:16.511864Z","caller":"traceutil/trace.go:171","msg":"trace[387717095] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:818; }","duration":"112.822667ms","start":"2024-02-13T22:41:16.399021Z","end":"2024-02-13T22:41:16.511844Z","steps":["trace[387717095] 'range keys from in-memory index tree'  (duration: 112.704292ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T22:41:58.051881Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.272226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10572"}
	{"level":"info","ts":"2024-02-13T22:41:58.051913Z","caller":"traceutil/trace.go:171","msg":"trace[585008143] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:925; }","duration":"153.309852ms","start":"2024-02-13T22:41:57.898596Z","end":"2024-02-13T22:41:58.051906Z","steps":["trace[585008143] 'range keys from in-memory index tree'  (duration: 153.139811ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T22:43:27.834585Z","caller":"traceutil/trace.go:171","msg":"trace[1400194638] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"105.174336ms","start":"2024-02-13T22:43:27.729395Z","end":"2024-02-13T22:43:27.834569Z","steps":["trace[1400194638] 'process raft request'  (duration: 104.598139ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T22:43:55.799184Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.155232ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-02-13T22:43:55.799493Z","caller":"traceutil/trace.go:171","msg":"trace[1254751075] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1322; }","duration":"224.480603ms","start":"2024-02-13T22:43:55.575004Z","end":"2024-02-13T22:43:55.799485Z","steps":["trace[1254751075] 'range keys from in-memory index tree'  (duration: 224.099191ms)"],"step_count":1}
	
	
	==> gcp-auth [491b9bfdeade] <==
	2024/02/13 22:43:34 GCP Auth Webhook started!
	2024/02/13 22:43:45 Ready to marshal response ...
	2024/02/13 22:43:45 Ready to write response ...
	2024/02/13 22:43:47 Ready to marshal response ...
	2024/02/13 22:43:47 Ready to write response ...
	2024/02/13 22:44:09 Ready to marshal response ...
	2024/02/13 22:44:09 Ready to write response ...
	2024/02/13 22:44:17 Ready to marshal response ...
	2024/02/13 22:44:17 Ready to write response ...
	2024/02/13 22:44:19 Ready to marshal response ...
	2024/02/13 22:44:19 Ready to write response ...
	2024/02/13 22:44:33 Ready to marshal response ...
	2024/02/13 22:44:33 Ready to write response ...
	2024/02/13 22:44:33 Ready to marshal response ...
	2024/02/13 22:44:33 Ready to write response ...
	2024/02/13 22:44:42 Ready to marshal response ...
	2024/02/13 22:44:42 Ready to write response ...
	
	
	==> kernel <==
	 22:44:43 up 4 min,  0 users,  load average: 0.46, 0.53, 0.25
	Linux addons-975000 5.10.57 #1 SMP PREEMPT Thu Dec 28 19:03:47 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [4c9ab0a949ef] <==
	I0213 22:44:09.123423       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0213 22:44:09.223481       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.91.101"}
	I0213 22:44:19.493020       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.147.171"}
	I0213 22:44:32.816982       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:44:32.817002       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:44:32.819538       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:44:32.819553       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:44:32.823392       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:44:32.823409       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:44:32.828376       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:44:32.828389       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:44:32.828456       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:44:32.828466       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:44:32.833859       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:44:32.833869       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:44:32.838965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:44:32.838978       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:44:32.844840       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:44:32.844854       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0213 22:44:33.829173       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0213 22:44:33.834854       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0213 22:44:33.853455       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0213 22:44:35.664765       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0213 22:44:37.425213       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I0213 22:44:40.736857       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [423f53882242] <==
	I0213 22:44:33.051115       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	E0213 22:44:33.830099       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:44:33.835561       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:44:33.854145       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:44:34.792730       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:44:34.792753       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:44:35.062775       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:44:35.062797       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:44:35.299659       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:44:35.299678       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0213 22:44:35.627074       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0213 22:44:35.627438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="2.5µs"
	I0213 22:44:35.629753       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0213 22:44:37.186928       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:44:37.186946       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:44:37.625853       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:44:37.625878       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:44:38.471165       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:44:38.471181       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:44:42.364342       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:44:42.364361       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0213 22:44:42.541765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="3.833µs"
	I0213 22:44:42.633155       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="26.041µs"
	W0213 22:44:43.102723       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:44:43.102744       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [9e39cf14fa62] <==
	I0213 22:40:59.780900       1 server_others.go:69] "Using iptables proxy"
	I0213 22:40:59.793752       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0213 22:40:59.827102       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0213 22:40:59.827117       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 22:40:59.846117       1 server_others.go:152] "Using iptables Proxier"
	I0213 22:40:59.846210       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 22:40:59.846335       1 server.go:846] "Version info" version="v1.28.4"
	I0213 22:40:59.846342       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 22:40:59.850722       1 config.go:188] "Starting service config controller"
	I0213 22:40:59.850748       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 22:40:59.850763       1 config.go:97] "Starting endpoint slice config controller"
	I0213 22:40:59.850766       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 22:40:59.851701       1 config.go:315] "Starting node config controller"
	I0213 22:40:59.851705       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 22:40:59.952338       1 shared_informer.go:318] Caches are synced for node config
	I0213 22:40:59.952353       1 shared_informer.go:318] Caches are synced for service config
	I0213 22:40:59.952365       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [694357bbbef7] <==
	W0213 22:40:43.364479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0213 22:40:43.364505       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 22:40:43.364519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 22:40:43.364528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0213 22:40:43.364548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 22:40:43.364564       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0213 22:40:43.364836       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 22:40:43.365585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 22:40:43.365631       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0213 22:40:43.365656       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 22:40:43.365660       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 22:40:43.365663       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 22:40:43.365719       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 22:40:43.365722       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 22:40:43.365792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 22:40:43.365794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 22:40:43.365796       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 22:40:43.365798       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 22:40:43.365902       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 22:40:43.365905       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0213 22:40:44.195069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 22:40:44.195090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0213 22:40:44.299452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 22:40:44.299472       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0213 22:40:44.860786       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 22:40:28 UTC, ends at Tue 2024-02-13 22:44:43 UTC. --
	Feb 13 22:44:41 addons-975000 kubelet[2271]: I0213 22:44:41.779267    2271 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3d8b6840-83e4-4a0a-afb7-94305920c666-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89\") pod \"3d8b6840-83e4-4a0a-afb7-94305920c666\" (UID: \"3d8b6840-83e4-4a0a-afb7-94305920c666\") "
	Feb 13 22:44:41 addons-975000 kubelet[2271]: I0213 22:44:41.779291    2271 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjvmc\" (UniqueName: \"kubernetes.io/projected/3d8b6840-83e4-4a0a-afb7-94305920c666-kube-api-access-kjvmc\") pod \"3d8b6840-83e4-4a0a-afb7-94305920c666\" (UID: \"3d8b6840-83e4-4a0a-afb7-94305920c666\") "
	Feb 13 22:44:41 addons-975000 kubelet[2271]: I0213 22:44:41.779303    2271 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3d8b6840-83e4-4a0a-afb7-94305920c666-gcp-creds\") pod \"3d8b6840-83e4-4a0a-afb7-94305920c666\" (UID: \"3d8b6840-83e4-4a0a-afb7-94305920c666\") "
	Feb 13 22:44:41 addons-975000 kubelet[2271]: I0213 22:44:41.779334    2271 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d8b6840-83e4-4a0a-afb7-94305920c666-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "3d8b6840-83e4-4a0a-afb7-94305920c666" (UID: "3d8b6840-83e4-4a0a-afb7-94305920c666"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Feb 13 22:44:41 addons-975000 kubelet[2271]: I0213 22:44:41.779345    2271 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d8b6840-83e4-4a0a-afb7-94305920c666-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89" (OuterVolumeSpecName: "data") pod "3d8b6840-83e4-4a0a-afb7-94305920c666" (UID: "3d8b6840-83e4-4a0a-afb7-94305920c666"). InnerVolumeSpecName "pvc-69fc1814-f173-4904-b8e0-9dadd6946f89". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Feb 13 22:44:41 addons-975000 kubelet[2271]: I0213 22:44:41.780318    2271 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8b6840-83e4-4a0a-afb7-94305920c666-kube-api-access-kjvmc" (OuterVolumeSpecName: "kube-api-access-kjvmc") pod "3d8b6840-83e4-4a0a-afb7-94305920c666" (UID: "3d8b6840-83e4-4a0a-afb7-94305920c666"). InnerVolumeSpecName "kube-api-access-kjvmc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 13 22:44:41 addons-975000 kubelet[2271]: I0213 22:44:41.879861    2271 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3d8b6840-83e4-4a0a-afb7-94305920c666-gcp-creds\") on node \"addons-975000\" DevicePath \"\""
	Feb 13 22:44:41 addons-975000 kubelet[2271]: I0213 22:44:41.879877    2271 reconciler_common.go:300] "Volume detached for volume \"pvc-69fc1814-f173-4904-b8e0-9dadd6946f89\" (UniqueName: \"kubernetes.io/host-path/3d8b6840-83e4-4a0a-afb7-94305920c666-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89\") on node \"addons-975000\" DevicePath \"\""
	Feb 13 22:44:41 addons-975000 kubelet[2271]: I0213 22:44:41.879884    2271 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kjvmc\" (UniqueName: \"kubernetes.io/projected/3d8b6840-83e4-4a0a-afb7-94305920c666-kube-api-access-kjvmc\") on node \"addons-975000\" DevicePath \"\""
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.284647    2271 topology_manager.go:215] "Topology Admit Handler" podUID="ed4957c6-ddb5-4446-929a-ef7de682e317" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: E0213 22:44:42.284687    2271 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d8b6840-83e4-4a0a-afb7-94305920c666" containerName="busybox"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: E0213 22:44:42.284723    2271 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="549cdf17-10db-4eaf-a173-d54df289ac11" containerName="minikube-ingress-dns"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.284740    2271 memory_manager.go:346] "RemoveStaleState removing state" podUID="3d8b6840-83e4-4a0a-afb7-94305920c666" containerName="busybox"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.284743    2271 memory_manager.go:346] "RemoveStaleState removing state" podUID="549cdf17-10db-4eaf-a173-d54df289ac11" containerName="minikube-ingress-dns"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.284747    2271 memory_manager.go:346] "RemoveStaleState removing state" podUID="549cdf17-10db-4eaf-a173-d54df289ac11" containerName="minikube-ingress-dns"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.348102    2271 scope.go:117] "RemoveContainer" containerID="e4e0f1f867d3c46398e9c2f84214d050aee656e1568887ce08e43e00d79a0e06"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.351682    2271 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3d8b6840-83e4-4a0a-afb7-94305920c666" path="/var/lib/kubelet/pods/3d8b6840-83e4-4a0a-afb7-94305920c666/volumes"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.382224    2271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/ed4957c6-ddb5-4446-929a-ef7de682e317-data\") pod \"helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89\" (UID: \"ed4957c6-ddb5-4446-929a-ef7de682e317\") " pod="local-path-storage/helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.382252    2271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7zgj\" (UniqueName: \"kubernetes.io/projected/ed4957c6-ddb5-4446-929a-ef7de682e317-kube-api-access-h7zgj\") pod \"helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89\" (UID: \"ed4957c6-ddb5-4446-929a-ef7de682e317\") " pod="local-path-storage/helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.382265    2271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ed4957c6-ddb5-4446-929a-ef7de682e317-gcp-creds\") pod \"helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89\" (UID: \"ed4957c6-ddb5-4446-929a-ef7de682e317\") " pod="local-path-storage/helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.382279    2271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/ed4957c6-ddb5-4446-929a-ef7de682e317-script\") pod \"helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89\" (UID: \"ed4957c6-ddb5-4446-929a-ef7de682e317\") " pod="local-path-storage/helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.628260    2271 scope.go:117] "RemoveContainer" containerID="e4e0f1f867d3c46398e9c2f84214d050aee656e1568887ce08e43e00d79a0e06"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.628454    2271 scope.go:117] "RemoveContainer" containerID="b4c238f1c2eb60c8bb72e39837b75ebc0c7e5b27ed8853d86edc2a68fa4b5852"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: E0213 22:44:42.628565    2271 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-x5cfg_default(1b7f3d15-5a06-4652-941e-cd477cb0ffce)\"" pod="default/hello-world-app-5d77478584-x5cfg" podUID="1b7f3d15-5a06-4652-941e-cd477cb0ffce"
	Feb 13 22:44:42 addons-975000 kubelet[2271]: I0213 22:44:42.645448    2271 scope.go:117] "RemoveContainer" containerID="70dce1f485540ee8f41125877e73bae32b6141e8d6964c7420c3d32f1ecee21d"
	
	
	==> storage-provisioner [b6a885cf0869] <==
	I0213 22:41:03.776452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 22:41:03.813074       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 22:41:03.813098       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 22:41:03.828539       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 22:41:03.832798       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-975000_eed11878-5293-444f-b0a7-fcc184e582e3!
	I0213 22:41:03.833254       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3873d72-e71f-45b0-9c7c-b25b8f720a60", APIVersion:"v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-975000_eed11878-5293-444f-b0a7-fcc184e582e3 became leader
	I0213 22:41:03.934052       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-975000_eed11878-5293-444f-b0a7-fcc184e582e3!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-975000 -n addons-975000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-975000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-975000 describe pod helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-975000 describe pod helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89: exit status 1 (40.11725ms)

** stderr ** 
	Error from server (NotFound): pods "helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-975000 describe pod helper-pod-delete-pvc-69fc1814-f173-4904-b8e0-9dadd6946f89: exit status 1
--- FAIL: TestAddons/parallel/Ingress (34.62s)

TestCertOptions (12.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-732000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-732000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (11.951813459s)

-- stdout --
	* [cert-options-732000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-732000 in cluster cert-options-732000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-732000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-732000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-732000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-732000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-732000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (80.011416ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-732000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-732000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-732000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-732000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-732000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (43.901292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-732000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-732000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-732000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-02-13 15:03:31.415694 -0800 PST m=+1487.347694501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-732000 -n cert-options-732000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-732000 -n cert-options-732000: exit status 7 (31.991333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-732000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-732000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-732000
--- FAIL: TestCertOptions (12.24s)
E0213 15:03:34.936644    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
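
The TestCertOptions failure above, like the identical socket_vmnet failures in the sections that follow, comes down to one symptom: the qemu2 driver gets "Connection refused" when dialing the vmnet helper's Unix socket. A minimal standalone probe can confirm whether anything is listening before the suite is rerun; the sketch below is an editor's illustration, not part of the test suite, and assumes the default SocketVMnetPath of /var/run/socket_vmnet recorded in the profile config later in this report:

	// probe_socket_vmnet.go - minimal sketch: dial the Unix socket that
	// minikube's qemu2 driver uses, to check whether socket_vmnet is up.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // assumed: SocketVMnetPath from the profile config
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "Connection refused" here matches the failures above:
			// nothing is listening on the socket (or it is absent).
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe reports the same connection-refused error, the socket_vmnet service itself is down on the build agent; verifying that it is installed and running (per the minikube qemu2 driver documentation) is the first step before re-running these tests.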

TestCertExpiration (197.62s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-172000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-172000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.2053265s)

-- stdout --
	* [cert-expiration-172000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-172000 in cluster cert-expiration-172000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-172000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-172000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-172000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-172000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-172000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.237243084s)

-- stdout --
	* [cert-expiration-172000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-172000 in cluster cert-expiration-172000
	* Restarting existing qemu2 VM for "cert-expiration-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-172000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-172000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-172000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-172000 in cluster cert-expiration-172000
	* Restarting existing qemu2 VM for "cert-expiration-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-172000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-02-13 15:06:34.03815 -0800 PST m=+1669.963001626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-172000 -n cert-expiration-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-172000 -n cert-expiration-172000: exit status 7 (71.472166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-172000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-172000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-172000
--- FAIL: TestCertExpiration (197.62s)

TestDockerFlags (12.6s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-818000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-818000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.199237166s)

-- stdout --
	* [docker-flags-818000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-818000 in cluster docker-flags-818000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-818000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:03:06.739723    3275 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:03:06.739847    3275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:03:06.739850    3275 out.go:304] Setting ErrFile to fd 2...
	I0213 15:03:06.739852    3275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:03:06.739989    3275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:03:06.741006    3275 out.go:298] Setting JSON to false
	I0213 15:03:06.758751    3275 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1808,"bootTime":1707863578,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:03:06.758846    3275 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:03:06.763828    3275 out.go:177] * [docker-flags-818000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:03:06.773833    3275 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:03:06.770911    3275 notify.go:220] Checking for updates...
	I0213 15:03:06.777776    3275 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:03:06.780798    3275 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:03:06.783809    3275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:03:06.786841    3275 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:03:06.789785    3275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:03:06.793172    3275 config.go:182] Loaded profile config "force-systemd-flag-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:03:06.793232    3275 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:03:06.793280    3275 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:03:06.797772    3275 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:03:06.804823    3275 start.go:298] selected driver: qemu2
	I0213 15:03:06.804828    3275 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:03:06.804832    3275 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:03:06.806932    3275 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:03:06.809735    3275 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:03:06.812861    3275 start_flags.go:922] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0213 15:03:06.812901    3275 cni.go:84] Creating CNI manager for ""
	I0213 15:03:06.812909    3275 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:03:06.812914    3275 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:03:06.812920    3275 start_flags.go:321] config:
	{Name:docker-flags-818000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-818000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:03:06.817323    3275 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:03:06.822787    3275 out.go:177] * Starting control plane node docker-flags-818000 in cluster docker-flags-818000
	I0213 15:03:06.826817    3275 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:03:06.826834    3275 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:03:06.826842    3275 cache.go:56] Caching tarball of preloaded images
	I0213 15:03:06.826892    3275 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:03:06.826897    3275 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:03:06.826963    3275 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/docker-flags-818000/config.json ...
	I0213 15:03:06.826973    3275 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/docker-flags-818000/config.json: {Name:mk1be4037d445e98182d4413cf3a2dcf6a6063b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:03:06.827206    3275 start.go:365] acquiring machines lock for docker-flags-818000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:03:08.773459    3275 start.go:369] acquired machines lock for "docker-flags-818000" in 1.946219542s
	I0213 15:03:08.773558    3275 start.go:93] Provisioning new machine with config: &{Name:docker-flags-818000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-818000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:03:08.773818    3275 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:03:08.779225    3275 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0213 15:03:08.827627    3275 start.go:159] libmachine.API.Create for "docker-flags-818000" (driver="qemu2")
	I0213 15:03:08.827681    3275 client.go:168] LocalClient.Create starting
	I0213 15:03:08.827877    3275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:03:08.827951    3275 main.go:141] libmachine: Decoding PEM data...
	I0213 15:03:08.827981    3275 main.go:141] libmachine: Parsing certificate...
	I0213 15:03:08.828049    3275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:03:08.828093    3275 main.go:141] libmachine: Decoding PEM data...
	I0213 15:03:08.828108    3275 main.go:141] libmachine: Parsing certificate...
	I0213 15:03:08.828789    3275 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:03:08.960126    3275 main.go:141] libmachine: Creating SSH key...
	I0213 15:03:09.095653    3275 main.go:141] libmachine: Creating Disk image...
	I0213 15:03:09.095660    3275 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:03:09.095868    3275 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/disk.qcow2
	I0213 15:03:09.108678    3275 main.go:141] libmachine: STDOUT: 
	I0213 15:03:09.108702    3275 main.go:141] libmachine: STDERR: 
	I0213 15:03:09.108752    3275 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/disk.qcow2 +20000M
	I0213 15:03:09.119431    3275 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:03:09.119453    3275 main.go:141] libmachine: STDERR: 
	I0213 15:03:09.119465    3275 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/disk.qcow2
	I0213 15:03:09.119475    3275 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:03:09.119515    3275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:a9:97:35:3f:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/disk.qcow2
	I0213 15:03:09.121266    3275 main.go:141] libmachine: STDOUT: 
	I0213 15:03:09.121281    3275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:03:09.121297    3275 client.go:171] LocalClient.Create took 293.618833ms
	I0213 15:03:11.123399    3275 start.go:128] duration metric: createHost completed in 2.349619209s
	I0213 15:03:11.123464    3275 start.go:83] releasing machines lock for "docker-flags-818000", held for 2.350036042s
	W0213 15:03:11.123541    3275 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:03:11.135777    3275 out.go:177] * Deleting "docker-flags-818000" in qemu2 ...
	W0213 15:03:11.158003    3275 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:03:11.158043    3275 start.go:709] Will try again in 5 seconds ...
	I0213 15:03:16.160140    3275 start.go:365] acquiring machines lock for docker-flags-818000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:03:16.201247    3275 start.go:369] acquired machines lock for "docker-flags-818000" in 40.994ms
	I0213 15:03:16.201396    3275 start.go:93] Provisioning new machine with config: &{Name:docker-flags-818000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-818000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:03:16.201661    3275 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:03:16.210129    3275 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0213 15:03:16.259094    3275 start.go:159] libmachine.API.Create for "docker-flags-818000" (driver="qemu2")
	I0213 15:03:16.259141    3275 client.go:168] LocalClient.Create starting
	I0213 15:03:16.259286    3275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:03:16.259338    3275 main.go:141] libmachine: Decoding PEM data...
	I0213 15:03:16.259357    3275 main.go:141] libmachine: Parsing certificate...
	I0213 15:03:16.259420    3275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:03:16.259447    3275 main.go:141] libmachine: Decoding PEM data...
	I0213 15:03:16.259465    3275 main.go:141] libmachine: Parsing certificate...
	I0213 15:03:16.259991    3275 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:03:16.451178    3275 main.go:141] libmachine: Creating SSH key...
	I0213 15:03:16.842350    3275 main.go:141] libmachine: Creating Disk image...
	I0213 15:03:16.842360    3275 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:03:16.842552    3275 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/disk.qcow2
	I0213 15:03:16.855255    3275 main.go:141] libmachine: STDOUT: 
	I0213 15:03:16.855277    3275 main.go:141] libmachine: STDERR: 
	I0213 15:03:16.855333    3275 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/disk.qcow2 +20000M
	I0213 15:03:16.866120    3275 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:03:16.866137    3275 main.go:141] libmachine: STDERR: 
	I0213 15:03:16.866150    3275 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/disk.qcow2
	I0213 15:03:16.866157    3275 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:03:16.866202    3275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:1e:23:9d:95:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/docker-flags-818000/disk.qcow2
	I0213 15:03:16.867846    3275 main.go:141] libmachine: STDOUT: 
	I0213 15:03:16.867861    3275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:03:16.867874    3275 client.go:171] LocalClient.Create took 608.746834ms
	I0213 15:03:18.870002    3275 start.go:128] duration metric: createHost completed in 2.668389125s
	I0213 15:03:18.870109    3275 start.go:83] releasing machines lock for "docker-flags-818000", held for 2.668875792s
	W0213 15:03:18.870403    3275 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-818000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-818000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:03:18.883263    3275 out.go:177] 
	W0213 15:03:18.887276    3275 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:03:18.887403    3275 out.go:239] * 
	* 
	W0213 15:03:18.889888    3275 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:03:18.898237    3275 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-818000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
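The exit status 80 above has the same root cause as the other qemu2 failures in this report: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. Below is a minimal Go sketch (not part of the test suite; file name and messages are illustrative) that probes that unix socket directly, which reproduces the "Connection refused" independently of minikube. The socket path is the SocketVMnetPath from the profile config above.

// socketcheck.go - hedged diagnostic sketch: dial the socket_vmnet unix
// socket that qemu is launched through and report whether it accepts
// connections.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the log above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Mirrors the launcher failure seen in the log:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On this agent the probe would fail the same way, confirming the daemon was not running rather than anything test-specific.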
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-818000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-818000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (100.312625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-818000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-818000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-818000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-818000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-818000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-818000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (105.81ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-818000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-818000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-818000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-818000\"\n"
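The assertions at docker_test.go:63 and docker_test.go:73 never got a real daemon to inspect, but what they encode is a plain string check against systemctl properties. A hedged sketch of the Environment check follows, assuming output of the form "Environment=FOO=BAR BAZ=BAT" (the helper name hasDockerEnv is hypothetical); the --debug check against the ExecStart property is the same pattern with a substring match.

// envcheck.go - hedged sketch of the env assertion the test performs against
// `systemctl show docker --property=Environment --no-pager` output.
package main

import (
	"fmt"
	"strings"
)

// hasDockerEnv reports whether a KEY=VALUE pair appears in the
// Environment= line printed by systemctl (hypothetical helper).
func hasDockerEnv(systemctlOut, kv string) bool {
	line := strings.TrimPrefix(strings.TrimSpace(systemctlOut), "Environment=")
	for _, pair := range strings.Fields(line) {
		if pair == kv {
			return true
		}
	}
	return false
}

func main() {
	sample := "Environment=FOO=BAR BAZ=BAT" // shape the test expects on success
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present: %v\n", kv, hasDockerEnv(sample, kv))
	}
}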
panic.go:523: *** TestDockerFlags FAILED at 2024-02-13 15:03:19.11659 -0800 PST m=+1475.048215584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-818000 -n docker-flags-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-818000 -n docker-flags-818000: exit status 7 (38.986666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-818000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-818000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-818000
--- FAIL: TestDockerFlags (12.60s)

TestForceSystemdFlag (11.7s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-294000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-294000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.361316s)

-- stdout --
	* [force-systemd-flag-294000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-294000 in cluster force-systemd-flag-294000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-294000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:03:04.910056    3256 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:03:04.910188    3256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:03:04.910191    3256 out.go:304] Setting ErrFile to fd 2...
	I0213 15:03:04.910194    3256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:03:04.910318    3256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:03:04.911393    3256 out.go:298] Setting JSON to false
	I0213 15:03:04.927214    3256 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1806,"bootTime":1707863578,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:03:04.927302    3256 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:03:04.933390    3256 out.go:177] * [force-systemd-flag-294000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:03:04.940329    3256 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:03:04.940367    3256 notify.go:220] Checking for updates...
	I0213 15:03:04.944334    3256 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:03:04.947325    3256 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:03:04.950305    3256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:03:04.953295    3256 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:03:04.956317    3256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:03:04.959706    3256 config.go:182] Loaded profile config "force-systemd-env-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:03:04.959772    3256 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:03:04.959821    3256 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:03:04.963259    3256 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:03:04.970293    3256 start.go:298] selected driver: qemu2
	I0213 15:03:04.970302    3256 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:03:04.970307    3256 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:03:04.972564    3256 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:03:04.974075    3256 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:03:04.977404    3256 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 15:03:04.977459    3256 cni.go:84] Creating CNI manager for ""
	I0213 15:03:04.977467    3256 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:03:04.977472    3256 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:03:04.977478    3256 start_flags.go:321] config:
	{Name:force-systemd-flag-294000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-294000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:03:04.981887    3256 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:03:04.985349    3256 out.go:177] * Starting control plane node force-systemd-flag-294000 in cluster force-systemd-flag-294000
	I0213 15:03:04.993276    3256 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:03:04.993289    3256 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:03:04.993295    3256 cache.go:56] Caching tarball of preloaded images
	I0213 15:03:04.993351    3256 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:03:04.993356    3256 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:03:04.993427    3256 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/force-systemd-flag-294000/config.json ...
	I0213 15:03:04.993438    3256 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/force-systemd-flag-294000/config.json: {Name:mk9431bf4ce6c40b75818342a2f7b206bd8faf24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:03:04.993660    3256 start.go:365] acquiring machines lock for force-systemd-flag-294000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:03:06.314676    3256 start.go:369] acquired machines lock for "force-systemd-flag-294000" in 1.320971125s
	I0213 15:03:06.314836    3256 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-294000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:03:06.315104    3256 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:03:06.324896    3256 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0213 15:03:06.373348    3256 start.go:159] libmachine.API.Create for "force-systemd-flag-294000" (driver="qemu2")
	I0213 15:03:06.373399    3256 client.go:168] LocalClient.Create starting
	I0213 15:03:06.373505    3256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:03:06.373568    3256 main.go:141] libmachine: Decoding PEM data...
	I0213 15:03:06.373588    3256 main.go:141] libmachine: Parsing certificate...
	I0213 15:03:06.373653    3256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:03:06.373696    3256 main.go:141] libmachine: Decoding PEM data...
	I0213 15:03:06.373710    3256 main.go:141] libmachine: Parsing certificate...
	I0213 15:03:06.374314    3256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:03:06.569770    3256 main.go:141] libmachine: Creating SSH key...
	I0213 15:03:06.743151    3256 main.go:141] libmachine: Creating Disk image...
	I0213 15:03:06.743162    3256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:03:06.743339    3256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/disk.qcow2
	I0213 15:03:06.755965    3256 main.go:141] libmachine: STDOUT: 
	I0213 15:03:06.755988    3256 main.go:141] libmachine: STDERR: 
	I0213 15:03:06.756057    3256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/disk.qcow2 +20000M
	I0213 15:03:06.769104    3256 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:03:06.769126    3256 main.go:141] libmachine: STDERR: 
	I0213 15:03:06.769144    3256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/disk.qcow2
	I0213 15:03:06.769149    3256 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:03:06.769219    3256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:43:6a:f1:b4:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/disk.qcow2
	I0213 15:03:06.770995    3256 main.go:141] libmachine: STDOUT: 
	I0213 15:03:06.771015    3256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:03:06.771036    3256 client.go:171] LocalClient.Create took 397.641875ms
	I0213 15:03:08.773187    3256 start.go:128] duration metric: createHost completed in 2.458121791s
	I0213 15:03:08.773270    3256 start.go:83] releasing machines lock for "force-systemd-flag-294000", held for 2.458626083s
	W0213 15:03:08.773329    3256 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:03:08.787166    3256 out.go:177] * Deleting "force-systemd-flag-294000" in qemu2 ...
	W0213 15:03:08.810872    3256 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:03:08.810890    3256 start.go:709] Will try again in 5 seconds ...
	I0213 15:03:13.812206    3256 start.go:365] acquiring machines lock for force-systemd-flag-294000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:03:13.812737    3256 start.go:369] acquired machines lock for "force-systemd-flag-294000" in 408.459µs
	I0213 15:03:13.812957    3256 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-294000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:03:13.813244    3256 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:03:13.818914    3256 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0213 15:03:13.870330    3256 start.go:159] libmachine.API.Create for "force-systemd-flag-294000" (driver="qemu2")
	I0213 15:03:13.870382    3256 client.go:168] LocalClient.Create starting
	I0213 15:03:13.870503    3256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:03:13.870573    3256 main.go:141] libmachine: Decoding PEM data...
	I0213 15:03:13.870591    3256 main.go:141] libmachine: Parsing certificate...
	I0213 15:03:13.870643    3256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:03:13.870691    3256 main.go:141] libmachine: Decoding PEM data...
	I0213 15:03:13.870702    3256 main.go:141] libmachine: Parsing certificate...
	I0213 15:03:13.871249    3256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:03:14.004148    3256 main.go:141] libmachine: Creating SSH key...
	I0213 15:03:14.173046    3256 main.go:141] libmachine: Creating Disk image...
	I0213 15:03:14.173057    3256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:03:14.173254    3256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/disk.qcow2
	I0213 15:03:14.186020    3256 main.go:141] libmachine: STDOUT: 
	I0213 15:03:14.186044    3256 main.go:141] libmachine: STDERR: 
	I0213 15:03:14.186126    3256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/disk.qcow2 +20000M
	I0213 15:03:14.197070    3256 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:03:14.197087    3256 main.go:141] libmachine: STDERR: 
	I0213 15:03:14.197106    3256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/disk.qcow2
	I0213 15:03:14.197114    3256 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:03:14.197160    3256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:66:27:0f:51:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-flag-294000/disk.qcow2
	I0213 15:03:14.198897    3256 main.go:141] libmachine: STDOUT: 
	I0213 15:03:14.198914    3256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:03:14.198927    3256 client.go:171] LocalClient.Create took 328.549458ms
	I0213 15:03:16.201071    3256 start.go:128] duration metric: createHost completed in 2.387875s
	I0213 15:03:16.201130    3256 start.go:83] releasing machines lock for "force-systemd-flag-294000", held for 2.388395542s
	W0213 15:03:16.201462    3256 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:03:16.215140    3256 out.go:177] 
	W0213 15:03:16.219359    3256 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:03:16.219413    3256 out.go:239] * 
	* 
	W0213 15:03:16.221694    3256 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:03:16.233131    3256 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-294000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-294000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-294000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (100.739667ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-294000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-294000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-02-13 15:03:16.344573 -0800 PST m=+1472.276114376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-294000 -n force-systemd-flag-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-294000 -n force-systemd-flag-294000: exit status 7 (41.32125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-294000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-294000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-294000
--- FAIL: TestForceSystemdFlag (11.70s)
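Had the VM come up, docker_test.go:110 would have asserted the cgroup driver. A standalone sketch of that assertion, assuming a docker CLI reachable on the local PATH rather than via `minikube ssh` (which, as the output above shows, needs a running control plane):

// cgroupcheck.go - hedged sketch of the check TestForceSystemdFlag never
// reached: ask Docker for its cgroup driver and require "systemd".
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "docker info failed: %v\n", err)
		os.Exit(1)
	}
	driver := strings.TrimSpace(string(out))
	if driver != "systemd" {
		fmt.Fprintf(os.Stderr, "expected cgroup driver systemd, got %q\n", driver)
		os.Exit(1)
	}
	fmt.Println("cgroup driver is systemd")
}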

TestForceSystemdEnv (10.16s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-056000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-056000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.813646583s)

-- stdout --
	* [force-systemd-env-056000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-056000 in cluster force-systemd-env-056000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-056000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:02:56.577566    3218 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:02:56.577728    3218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:02:56.577735    3218 out.go:304] Setting ErrFile to fd 2...
	I0213 15:02:56.577738    3218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:02:56.577871    3218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:02:56.578897    3218 out.go:298] Setting JSON to false
	I0213 15:02:56.594984    3218 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1798,"bootTime":1707863578,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:02:56.595052    3218 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:02:56.601373    3218 out.go:177] * [force-systemd-env-056000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:02:56.609021    3218 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:02:56.609056    3218 notify.go:220] Checking for updates...
	I0213 15:02:56.616210    3218 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:02:56.623139    3218 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:02:56.626176    3218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:02:56.627713    3218 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:02:56.631124    3218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0213 15:02:56.634524    3218 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:02:56.634570    3218 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:02:56.639006    3218 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:02:56.646106    3218 start.go:298] selected driver: qemu2
	I0213 15:02:56.646110    3218 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:02:56.646115    3218 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:02:56.648542    3218 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:02:56.651985    3218 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:02:56.655174    3218 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 15:02:56.655226    3218 cni.go:84] Creating CNI manager for ""
	I0213 15:02:56.655234    3218 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:02:56.655241    3218 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:02:56.655248    3218 start_flags.go:321] config:
	{Name:force-systemd-env-056000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-056000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:02:56.659981    3218 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:02:56.666123    3218 out.go:177] * Starting control plane node force-systemd-env-056000 in cluster force-systemd-env-056000
	I0213 15:02:56.670150    3218 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:02:56.670166    3218 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:02:56.670175    3218 cache.go:56] Caching tarball of preloaded images
	I0213 15:02:56.670239    3218 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:02:56.670246    3218 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:02:56.670321    3218 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/force-systemd-env-056000/config.json ...
	I0213 15:02:56.670334    3218 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/force-systemd-env-056000/config.json: {Name:mk48ab7360692a88d453400660d7a6ef7a790b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:02:56.670549    3218 start.go:365] acquiring machines lock for force-systemd-env-056000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:02:56.670586    3218 start.go:369] acquired machines lock for "force-systemd-env-056000" in 28.625µs
	I0213 15:02:56.670599    3218 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-056000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:02:56.670632    3218 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:02:56.678147    3218 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0213 15:02:56.696191    3218 start.go:159] libmachine.API.Create for "force-systemd-env-056000" (driver="qemu2")
	I0213 15:02:56.696225    3218 client.go:168] LocalClient.Create starting
	I0213 15:02:56.696307    3218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:02:56.696339    3218 main.go:141] libmachine: Decoding PEM data...
	I0213 15:02:56.696347    3218 main.go:141] libmachine: Parsing certificate...
	I0213 15:02:56.696392    3218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:02:56.696415    3218 main.go:141] libmachine: Decoding PEM data...
	I0213 15:02:56.696422    3218 main.go:141] libmachine: Parsing certificate...
	I0213 15:02:56.696774    3218 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:02:56.818590    3218 main.go:141] libmachine: Creating SSH key...
	I0213 15:02:56.952326    3218 main.go:141] libmachine: Creating Disk image...
	I0213 15:02:56.952333    3218 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:02:56.952513    3218 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/disk.qcow2
	I0213 15:02:56.964890    3218 main.go:141] libmachine: STDOUT: 
	I0213 15:02:56.964914    3218 main.go:141] libmachine: STDERR: 
	I0213 15:02:56.964970    3218 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/disk.qcow2 +20000M
	I0213 15:02:56.975663    3218 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:02:56.975686    3218 main.go:141] libmachine: STDERR: 
	I0213 15:02:56.975703    3218 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/disk.qcow2
	I0213 15:02:56.975710    3218 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:02:56.975737    3218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:b8:e7:61:75:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/disk.qcow2
	I0213 15:02:56.977489    3218 main.go:141] libmachine: STDOUT: 
	I0213 15:02:56.977506    3218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:02:56.977525    3218 client.go:171] LocalClient.Create took 281.304166ms
	I0213 15:02:58.979733    3218 start.go:128] duration metric: createHost completed in 2.309142167s
	I0213 15:02:58.979799    3218 start.go:83] releasing machines lock for "force-systemd-env-056000", held for 2.309273292s
	W0213 15:02:58.979845    3218 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:02:58.986028    3218 out.go:177] * Deleting "force-systemd-env-056000" in qemu2 ...
	W0213 15:02:59.009606    3218 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:02:59.009641    3218 start.go:709] Will try again in 5 seconds ...
	I0213 15:03:04.011762    3218 start.go:365] acquiring machines lock for force-systemd-env-056000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:03:04.012181    3218 start.go:369] acquired machines lock for "force-systemd-env-056000" in 318.5µs
	I0213 15:03:04.012307    3218 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-056000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:03:04.012491    3218 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:03:04.022137    3218 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0213 15:03:04.070484    3218 start.go:159] libmachine.API.Create for "force-systemd-env-056000" (driver="qemu2")
	I0213 15:03:04.070534    3218 client.go:168] LocalClient.Create starting
	I0213 15:03:04.070648    3218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:03:04.070713    3218 main.go:141] libmachine: Decoding PEM data...
	I0213 15:03:04.070729    3218 main.go:141] libmachine: Parsing certificate...
	I0213 15:03:04.070787    3218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:03:04.070828    3218 main.go:141] libmachine: Decoding PEM data...
	I0213 15:03:04.070841    3218 main.go:141] libmachine: Parsing certificate...
	I0213 15:03:04.071333    3218 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:03:04.208293    3218 main.go:141] libmachine: Creating SSH key...
	I0213 15:03:04.285506    3218 main.go:141] libmachine: Creating Disk image...
	I0213 15:03:04.285513    3218 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:03:04.285710    3218 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/disk.qcow2
	I0213 15:03:04.298681    3218 main.go:141] libmachine: STDOUT: 
	I0213 15:03:04.298701    3218 main.go:141] libmachine: STDERR: 
	I0213 15:03:04.298785    3218 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/disk.qcow2 +20000M
	I0213 15:03:04.310301    3218 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:03:04.310320    3218 main.go:141] libmachine: STDERR: 
	I0213 15:03:04.310335    3218 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/disk.qcow2
	I0213 15:03:04.310343    3218 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:03:04.310387    3218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:b2:7b:dd:7a:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/force-systemd-env-056000/disk.qcow2
	I0213 15:03:04.312136    3218 main.go:141] libmachine: STDOUT: 
	I0213 15:03:04.312154    3218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:03:04.312164    3218 client.go:171] LocalClient.Create took 241.631459ms
	I0213 15:03:06.314401    3218 start.go:128] duration metric: createHost completed in 2.301897291s
	I0213 15:03:06.314501    3218 start.go:83] releasing machines lock for "force-systemd-env-056000", held for 2.302360625s
	W0213 15:03:06.314821    3218 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-056000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:03:06.333889    3218 out.go:177] 
	W0213 15:03:06.337917    3218 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:03:06.337948    3218 out.go:239] * 
	W0213 15:03:06.340041    3218 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:03:06.349838    3218 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-056000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-056000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-056000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (108.417459ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-056000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-056000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-02-13 15:03:06.470225 -0800 PST m=+1462.401464918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-056000 -n force-systemd-env-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-056000 -n force-systemd-env-056000: exit status 7 (39.560958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-056000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-056000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-056000
--- FAIL: TestForceSystemdEnv (10.16s)
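Analysis: both create attempts above fail at the same step. libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which needs the socket_vmnet daemon listening on /var/run/socket_vmnet; "Connection refused" means nothing is accepting connections on that socket, so qemu2 VM creation on this agent dies before the guest ever boots. The cgroup-driver check that follows (exit status 89) is a cascade failure: the ssh command requires a running control plane, and the VM never came up. A minimal spot-check on the build host, assuming a Homebrew-managed socket_vmnet install (the service name and the use of "true" as a probe command are assumptions, not taken from this log):

    # Is anything listening on the socket the qemu2 driver expects?
    ls -l /var/run/socket_vmnet
    # Start the daemon; it must run as root to create the vmnet interface
    sudo brew services start socket_vmnet
    # Probe the same launch path the tests use; prints "reachable" only if the connect succeeds
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo reachable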
TestFunctional/parallel/ServiceCmdConnect (27.39s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-023000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-023000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-xxjct" [a91f83b1-de3a-4e82-bbc1-a07654eda3a2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-xxjct" [a91f83b1-de3a-4e82-bbc1-a07654eda3a2] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004256875s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:30122
functional_test.go:1657: error fetching http://192.168.105.4:30122: Get "http://192.168.105.4:30122": dial tcp 192.168.105.4:30122: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30122: Get "http://192.168.105.4:30122": dial tcp 192.168.105.4:30122: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30122: Get "http://192.168.105.4:30122": dial tcp 192.168.105.4:30122: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30122: Get "http://192.168.105.4:30122": dial tcp 192.168.105.4:30122: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30122: Get "http://192.168.105.4:30122": dial tcp 192.168.105.4:30122: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30122: Get "http://192.168.105.4:30122": dial tcp 192.168.105.4:30122: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30122: Get "http://192.168.105.4:30122": dial tcp 192.168.105.4:30122: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:30122: Get "http://192.168.105.4:30122": dial tcp 192.168.105.4:30122: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-023000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-xxjct
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-023000/192.168.105.4
Start Time:       Tue, 13 Feb 2024 14:51:34 -0800
Labels:           app=hello-node-connect
pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
echoserver-arm:
Container ID:   docker://c51898944fdad30b6c7957d1dd1764799b5d3bb854969169c1bb78f3d1a4d9f0
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Tue, 13 Feb 2024 14:51:50 -0800
Finished:     Tue, 13 Feb 2024 14:51:50 -0800
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Tue, 13 Feb 2024 14:51:36 -0800
Finished:     Tue, 13 Feb 2024 14:51:36 -0800
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xrr66 (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-xrr66:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  26s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-xxjct to functional-023000
Normal   Pulled     11s (x3 over 26s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    11s (x3 over 26s)  kubelet            Created container echoserver-arm
Normal   Started    11s (x3 over 26s)  kubelet            Started container echoserver-arm
Warning  BackOff    10s (x2 over 24s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-xxjct_default(a91f83b1-de3a-4e82-bbc1-a07654eda3a2)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-023000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1610: (dbg) Run:  kubectl --context functional-023000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.34.99
IPs:                      10.106.34.99
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30122/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
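Analysis: the post-mortem output above is internally consistent with the connection-refused fetches. The container log above is "exec /usr/sbin/nginx: exec format error", i.e. the binary inside registry.k8s.io/echoserver-arm:1.8 was built for a CPU architecture other than this arm64 node, so the container exits immediately (Exit Code 1, Restart Count 2, BackOff events), the pod never becomes Ready, the Service keeps an empty Endpoints list, and nothing answers on NodePort 30122. Two illustrative checks, using standard kubectl/docker invocations rather than anything captured in this run:

    # Confirm the Service has no ready endpoints behind the NodePort
    kubectl --context functional-023000 get endpoints hello-node-connect
    # Inspect the architecture recorded in the image already present on the node
    out/minikube-darwin-arm64 -p functional-023000 ssh "docker image inspect --format '{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8"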
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-023000 -n functional-023000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-023000 image load                                                                                         | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	|         | /Users/jenkins/workspace/addon-resizer-save.tar                                                                      |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| service | functional-023000                                                                                                    | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-023000 service                                                                                            | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-023000 ssh echo                                                                                           | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	|         | hello                                                                                                                |                   |         |         |                     |                     |
	| ssh     | functional-023000 ssh cat                                                                                            | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	|         | /etc/hostname                                                                                                        |                   |         |         |                     |                     |
	| tunnel  | functional-023000 tunnel                                                                                             | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| tunnel  | functional-023000 tunnel                                                                                             | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| image   | functional-023000 image ls                                                                                           | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	| image   | functional-023000 image save --daemon                                                                                | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	|         | gcr.io/google-containers/addon-resizer:functional-023000                                                             |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| tunnel  | functional-023000 tunnel                                                                                             | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST |                     |
	|         | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| addons  | functional-023000 addons list                                                                                        | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	| addons  | functional-023000 addons list                                                                                        | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-023000 service                                                                                            | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-023000 ssh findmnt                                                                                        | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-023000                                                                                                 | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2134871966/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-023000 ssh findmnt                                                                                        | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-023000 ssh findmnt                                                                                        | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-023000 ssh findmnt                                                                                        | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-023000 ssh -- ls                                                                                          | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-023000 ssh cat                                                                                            | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:51 PST | 13 Feb 24 14:51 PST |
	|         | /mount-9p/test-1707864710097207000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-023000 ssh stat                                                                                           | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-023000 ssh stat                                                                                           | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-023000 ssh sudo                                                                                           | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-023000                                                                                                 | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2131026763/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-023000 ssh findmnt                                                                                        | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 14:50:28
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 14:50:28.887430    2040 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:50:28.887563    2040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:50:28.887565    2040 out.go:304] Setting ErrFile to fd 2...
	I0213 14:50:28.887567    2040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:50:28.887712    2040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 14:50:28.888712    2040 out.go:298] Setting JSON to false
	I0213 14:50:28.905215    2040 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1050,"bootTime":1707863578,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 14:50:28.905318    2040 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:50:28.909988    2040 out.go:177] * [functional-023000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 14:50:28.913988    2040 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 14:50:28.918037    2040 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:50:28.914073    2040 notify.go:220] Checking for updates...
	I0213 14:50:28.924966    2040 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 14:50:28.927991    2040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:50:28.930959    2040 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 14:50:28.933962    2040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 14:50:28.937215    2040 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 14:50:28.937261    2040 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:50:28.940821    2040 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 14:50:28.947953    2040 start.go:298] selected driver: qemu2
	I0213 14:50:28.947955    2040 start.go:902] validating driver "qemu2" against &{Name:functional-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:functional-023000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:50:28.947994    2040 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 14:50:28.950191    2040 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 14:50:28.950232    2040 cni.go:84] Creating CNI manager for ""
	I0213 14:50:28.950238    2040 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:50:28.950242    2040 start_flags.go:321] config:
	{Name:functional-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-023000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:50:28.954404    2040 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 14:50:28.961926    2040 out.go:177] * Starting control plane node functional-023000 in cluster functional-023000
	I0213 14:50:28.965899    2040 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 14:50:28.965910    2040 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 14:50:28.965917    2040 cache.go:56] Caching tarball of preloaded images
	I0213 14:50:28.965973    2040 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 14:50:28.965976    2040 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 14:50:28.966041    2040 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/config.json ...
	I0213 14:50:28.966485    2040 start.go:365] acquiring machines lock for functional-023000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 14:50:28.966515    2040 start.go:369] acquired machines lock for "functional-023000" in 26.833µs
	I0213 14:50:28.966522    2040 start.go:96] Skipping create...Using existing machine configuration
	I0213 14:50:28.966527    2040 fix.go:54] fixHost starting: 
	I0213 14:50:28.967165    2040 fix.go:102] recreateIfNeeded on functional-023000: state=Running err=<nil>
	W0213 14:50:28.967171    2040 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 14:50:28.975983    2040 out.go:177] * Updating the running qemu2 "functional-023000" VM ...
	I0213 14:50:28.979906    2040 machine.go:88] provisioning docker machine ...
	I0213 14:50:28.979914    2040 buildroot.go:166] provisioning hostname "functional-023000"
	I0213 14:50:28.979938    2040 main.go:141] libmachine: Using SSH client type: native
	I0213 14:50:28.980174    2040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10529b8e0] 0x10529e050 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0213 14:50:28.980178    2040 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-023000 && echo "functional-023000" | sudo tee /etc/hostname
	I0213 14:50:29.032669    2040 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-023000
	
	I0213 14:50:29.032714    2040 main.go:141] libmachine: Using SSH client type: native
	I0213 14:50:29.032942    2040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10529b8e0] 0x10529e050 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0213 14:50:29.032949    2040 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-023000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-023000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-023000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 14:50:29.081059    2040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 14:50:29.081066    2040 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18170-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18170-979/.minikube}
	I0213 14:50:29.081073    2040 buildroot.go:174] setting up certificates
	I0213 14:50:29.081078    2040 provision.go:83] configureAuth start
	I0213 14:50:29.081080    2040 provision.go:138] copyHostCerts
	I0213 14:50:29.081135    2040 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem, removing ...
	I0213 14:50:29.081138    2040 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem
	I0213 14:50:29.081256    2040 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem (1078 bytes)
	I0213 14:50:29.081432    2040 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem, removing ...
	I0213 14:50:29.081433    2040 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem
	I0213 14:50:29.081497    2040 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem (1123 bytes)
	I0213 14:50:29.081597    2040 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem, removing ...
	I0213 14:50:29.081599    2040 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem
	I0213 14:50:29.081739    2040 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem (1675 bytes)
	I0213 14:50:29.081866    2040 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem org=jenkins.functional-023000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-023000]
	I0213 14:50:29.276735    2040 provision.go:172] copyRemoteCerts
	I0213 14:50:29.276774    2040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 14:50:29.276782    2040 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/functional-023000/id_rsa Username:docker}
	I0213 14:50:29.305203    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 14:50:29.312003    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 14:50:29.319618    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0213 14:50:29.326514    2040 provision.go:86] duration metric: configureAuth took 245.439708ms
	I0213 14:50:29.326520    2040 buildroot.go:189] setting minikube options for container-runtime
	I0213 14:50:29.326638    2040 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 14:50:29.326682    2040 main.go:141] libmachine: Using SSH client type: native
	I0213 14:50:29.326899    2040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10529b8e0] 0x10529e050 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0213 14:50:29.326903    2040 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 14:50:29.377584    2040 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0213 14:50:29.377589    2040 buildroot.go:70] root file system type: tmpfs
	I0213 14:50:29.377639    2040 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 14:50:29.377703    2040 main.go:141] libmachine: Using SSH client type: native
	I0213 14:50:29.377933    2040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10529b8e0] 0x10529e050 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0213 14:50:29.377964    2040 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 14:50:29.431655    2040 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 14:50:29.431702    2040 main.go:141] libmachine: Using SSH client type: native
	I0213 14:50:29.431935    2040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10529b8e0] 0x10529e050 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0213 14:50:29.431941    2040 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 14:50:29.480517    2040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 14:50:29.480523    2040 machine.go:91] provisioned docker machine in 500.629083ms
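The "diff -u ... || { mv ...; systemctl ... }" one-liner above is an idempotent-update idiom: diff exits 0 when the new unit is byte-identical to the installed one, so the replace/reload/restart branch only runs when something actually changed. A minimal standalone sketch of the same pattern (file names illustrative):

    new=/lib/systemd/system/docker.service.new
    cur=/lib/systemd/system/docker.service
    sudo diff -u "$cur" "$new" || {
      sudo mv "$new" "$cur"              # install the changed unit
      sudo systemctl -f daemon-reload    # pick up the new file
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    }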
	I0213 14:50:29.480527    2040 start.go:300] post-start starting for "functional-023000" (driver="qemu2")
	I0213 14:50:29.480532    2040 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 14:50:29.480569    2040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 14:50:29.480576    2040 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/functional-023000/id_rsa Username:docker}
	I0213 14:50:29.506671    2040 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 14:50:29.508142    2040 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 14:50:29.508146    2040 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/addons for local assets ...
	I0213 14:50:29.508213    2040 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/files for local assets ...
	I0213 14:50:29.508317    2040 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem -> 14072.pem in /etc/ssl/certs
	I0213 14:50:29.508425    2040 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/test/nested/copy/1407/hosts -> hosts in /etc/test/nested/copy/1407
	I0213 14:50:29.508454    2040 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1407
	I0213 14:50:29.511714    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem --> /etc/ssl/certs/14072.pem (1708 bytes)
	I0213 14:50:29.519284    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/test/nested/copy/1407/hosts --> /etc/test/nested/copy/1407/hosts (40 bytes)
	I0213 14:50:29.526380    2040 start.go:303] post-start completed in 45.850667ms
	I0213 14:50:29.526384    2040 fix.go:56] fixHost completed within 559.877667ms
	I0213 14:50:29.526412    2040 main.go:141] libmachine: Using SSH client type: native
	I0213 14:50:29.526631    2040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10529b8e0] 0x10529e050 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0213 14:50:29.526634    2040 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0213 14:50:29.573934    2040 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707864629.676266102
	
	I0213 14:50:29.573939    2040 fix.go:206] guest clock: 1707864629.676266102
	I0213 14:50:29.573942    2040 fix.go:219] Guest: 2024-02-13 14:50:29.676266102 -0800 PST Remote: 2024-02-13 14:50:29.526385 -0800 PST m=+0.660441418 (delta=149.881102ms)
	I0213 14:50:29.573950    2040 fix.go:190] guest clock delta is within tolerance: 149.881102ms
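The clock check works by running date on the guest with sub-second precision (date +%s.%N prints epoch seconds.nanoseconds) and comparing against the host's wall clock; the 149.881102ms delta also absorbs the SSH round trip, so it is an upper bound rather than an exact skew. A coarser, portable version of the same comparison (key path and user taken from the log; whole seconds only, since macOS date lacks %N):

    key=/Users/jenkins/minikube-integration/18170-979/.minikube/machines/functional-023000/id_rsa
    host_now=$(date +%s)
    guest_now=$(ssh -i "$key" docker@192.168.105.4 date +%s)
    echo "guest-host delta: $((guest_now - host_now)) s"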
	I0213 14:50:29.573952    2040 start.go:83] releasing machines lock for "functional-023000", held for 607.453334ms
	I0213 14:50:29.574199    2040 ssh_runner.go:195] Run: cat /version.json
	I0213 14:50:29.574205    2040 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/functional-023000/id_rsa Username:docker}
	I0213 14:50:29.574237    2040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 14:50:29.574255    2040 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/functional-023000/id_rsa Username:docker}
	I0213 14:50:29.600211    2040 ssh_runner.go:195] Run: systemctl --version
	I0213 14:50:29.602523    2040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 14:50:29.645822    2040 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 14:50:29.645861    2040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 14:50:29.648573    2040 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0213 14:50:29.648578    2040 start.go:475] detecting cgroup driver to use...
	I0213 14:50:29.648639    2040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 14:50:29.653820    2040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 14:50:29.656647    2040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 14:50:29.660022    2040 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 14:50:29.660041    2040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 14:50:29.663700    2040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 14:50:29.667356    2040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 14:50:29.670979    2040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 14:50:29.673889    2040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 14:50:29.676791    2040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 14:50:29.680070    2040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 14:50:29.682908    2040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 14:50:29.687616    2040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:50:29.786100    2040 ssh_runner.go:195] Run: sudo systemctl restart containerd
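The run of sed commands above edits /etc/containerd/config.toml in place so containerd agrees with the kubelet on the cgroupfs driver (SystemdCgroup = false), uses the runc v2 shim, and points CNI at /etc/cni/net.d. The effect can be confirmed on the guest after the restart:

    # the sed edits should have left these settings in place
    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml
    sudo grep -n 'conf_dir' /etc/containerd/config.toml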
	I0213 14:50:29.792219    2040 start.go:475] detecting cgroup driver to use...
	I0213 14:50:29.792264    2040 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 14:50:29.799468    2040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 14:50:29.804435    2040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 14:50:29.810733    2040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 14:50:29.815645    2040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 14:50:29.820425    2040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 14:50:29.826046    2040 ssh_runner.go:195] Run: which cri-dockerd
	I0213 14:50:29.827394    2040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 14:50:29.830053    2040 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 14:50:29.835134    2040 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 14:50:29.923903    2040 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 14:50:30.032840    2040 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 14:50:30.032903    2040 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 14:50:30.038218    2040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:50:30.129304    2040 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 14:50:41.408118    2040 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.27914525s)
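The 130-byte /etc/docker/daemon.json written just before this restart is what pins Docker's cgroup driver to cgroupfs; its exact contents are not echoed in this log. To inspect the result on the guest:

    sudo cat /etc/docker/daemon.json
    docker info --format '{{.CgroupDriver}}'   # expected output: cgroupfs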
	I0213 14:50:41.408179    2040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 14:50:41.413171    2040 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0213 14:50:41.419339    2040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 14:50:41.424701    2040 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 14:50:41.479692    2040 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 14:50:41.564269    2040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:50:41.643459    2040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 14:50:41.649933    2040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 14:50:41.654690    2040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:50:41.743644    2040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 14:50:41.769683    2040 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 14:50:41.769744    2040 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 14:50:41.771725    2040 start.go:543] Will wait 60s for crictl version
	I0213 14:50:41.771754    2040 ssh_runner.go:195] Run: which crictl
	I0213 14:50:41.773061    2040 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 14:50:41.789486    2040 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0213 14:50:41.789565    2040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 14:50:41.801063    2040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 14:50:41.812478    2040 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0213 14:50:41.812556    2040 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0213 14:50:41.820739    2040 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0213 14:50:41.825675    2040 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 14:50:41.825712    2040 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 14:50:41.832647    2040 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-023000
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0213 14:50:41.832657    2040 docker.go:615] Images already preloaded, skipping extraction
	I0213 14:50:41.832699    2040 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 14:50:41.838472    2040 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-023000
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0213 14:50:41.838478    2040 cache_images.go:84] Images are preloaded, skipping loading
	I0213 14:50:41.838528    2040 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 14:50:41.846217    2040 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0213 14:50:41.846229    2040 cni.go:84] Creating CNI manager for ""
	I0213 14:50:41.846234    2040 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:50:41.846239    2040 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 14:50:41.846247    2040 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-023000 NodeName:functional-023000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 14:50:41.846320    2040 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-023000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
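The generated kubeadm.yaml above stacks four documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. With the v1.28.4 binaries staged on the node, the file can be sanity-checked before any init phase runs (kubeadm config validate exists from v1.27 onward, so this is expected to work here but is not shown in the log):

    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml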
	I0213 14:50:41.846348    2040 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-023000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-023000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
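The kubelet drop-in above uses the same empty-ExecStart-then-real-ExecStart reset as the docker unit earlier, and is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp lines below. Once in place, the merged unit can be reviewed with:

    systemctl cat kubelet   # shows the base unit followed by each drop-in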
	I0213 14:50:41.846402    2040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 14:50:41.849848    2040 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 14:50:41.849875    2040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 14:50:41.852799    2040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0213 14:50:41.858035    2040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 14:50:41.863396    2040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0213 14:50:41.868429    2040 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0213 14:50:41.869785    2040 certs.go:56] Setting up /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000 for IP: 192.168.105.4
	I0213 14:50:41.869792    2040 certs.go:190] acquiring lock for shared ca certs: {Name:mk65e421691b8fb2c09fb65e08f20f9a769da9f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:50:41.869910    2040 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key
	I0213 14:50:41.869950    2040 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key
	I0213 14:50:41.869996    2040 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.key
	I0213 14:50:41.870042    2040 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/apiserver.key.942c473b
	I0213 14:50:41.870082    2040 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/proxy-client.key
	I0213 14:50:41.870207    2040 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem (1338 bytes)
	W0213 14:50:41.870234    2040 certs.go:433] ignoring /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407_empty.pem, impossibly tiny 0 bytes
	I0213 14:50:41.870239    2040 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 14:50:41.870255    2040 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem (1078 bytes)
	I0213 14:50:41.870281    2040 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem (1123 bytes)
	I0213 14:50:41.870295    2040 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem (1675 bytes)
	I0213 14:50:41.870331    2040 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem (1708 bytes)
	I0213 14:50:41.870647    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 14:50:41.877740    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 14:50:41.884924    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 14:50:41.891665    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 14:50:41.898719    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 14:50:41.906304    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 14:50:41.913705    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 14:50:41.920848    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0213 14:50:41.927839    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem --> /usr/share/ca-certificates/1407.pem (1338 bytes)
	I0213 14:50:41.934865    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem --> /usr/share/ca-certificates/14072.pem (1708 bytes)
	I0213 14:50:41.942144    2040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 14:50:41.949210    2040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 14:50:41.954466    2040 ssh_runner.go:195] Run: openssl version
	I0213 14:50:41.956543    2040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1407.pem && ln -fs /usr/share/ca-certificates/1407.pem /etc/ssl/certs/1407.pem"
	I0213 14:50:41.959478    2040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1407.pem
	I0213 14:50:41.962422    2040 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:48 /usr/share/ca-certificates/1407.pem
	I0213 14:50:41.962452    2040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1407.pem
	I0213 14:50:41.965131    2040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1407.pem /etc/ssl/certs/51391683.0"
	I0213 14:50:41.969427    2040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14072.pem && ln -fs /usr/share/ca-certificates/14072.pem /etc/ssl/certs/14072.pem"
	I0213 14:50:41.973421    2040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14072.pem
	I0213 14:50:41.975664    2040 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:48 /usr/share/ca-certificates/14072.pem
	I0213 14:50:41.975695    2040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14072.pem
	I0213 14:50:41.977741    2040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14072.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 14:50:41.981981    2040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 14:50:41.986622    2040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:50:41.989114    2040 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:40 /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:50:41.989148    2040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:50:41.991225    2040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
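The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's c_rehash convention: tools using the default verify paths look CAs up by subject-name hash, so each certificate in /etc/ssl/certs needs a <hash>.0 link. The hash is exactly what the "openssl x509 -hash" runs in the log print, e.g.:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"   # h is b5213941 here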
	I0213 14:50:41.994020    2040 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 14:50:41.995552    2040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 14:50:41.997555    2040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 14:50:41.999510    2040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 14:50:42.001302    2040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 14:50:42.003093    2040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 14:50:42.005040    2040 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
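Each of the six openssl runs above uses -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that exit status is how the restart path decides whether control-plane certs must be regenerated. For example:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expiring soon - regenerate"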
	I0213 14:50:42.006892    2040 kubeadm.go:404] StartCluster: {Name:functional-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-023000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:50:42.006957    2040 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 14:50:42.015606    2040 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 14:50:42.018874    2040 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 14:50:42.018882    2040 kubeadm.go:636] restartCluster start
	I0213 14:50:42.018902    2040 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 14:50:42.021925    2040 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 14:50:42.022193    2040 kubeconfig.go:92] found "functional-023000" server: "https://192.168.105.4:8441"
	I0213 14:50:42.022908    2040 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 14:50:42.026172    2040 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0213 14:50:42.026175    2040 kubeadm.go:1135] stopping kube-system containers ...
	I0213 14:50:42.026214    2040 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 14:50:42.033536    2040 docker.go:483] Stopping containers: [e50a56d76c4f f713ef213d1e 8247211c0b94 02a5890ca97a 94ebb14965ec 17ff29fb9034 394efea15479 20a6df9ad125 2e07c86de0d0 f6cfdad2964f 66efa4dc1972 3a00cb470dbb ff65fb976a72 accdbdf8906b f29d30e326ab 0d71327388f8 26ff15f74afb c78456f8d405 149cb5ac9c13 2678f81a699f a266f2accf89 2c31b9ee3dd6 b053d9fb6277 56d42047f742 3b05555d410c 6c13f401b629 a5166c4dde63 4a73f22abd5c]
	I0213 14:50:42.033591    2040 ssh_runner.go:195] Run: docker stop e50a56d76c4f f713ef213d1e 8247211c0b94 02a5890ca97a 94ebb14965ec 17ff29fb9034 394efea15479 20a6df9ad125 2e07c86de0d0 f6cfdad2964f 66efa4dc1972 3a00cb470dbb ff65fb976a72 accdbdf8906b f29d30e326ab 0d71327388f8 26ff15f74afb c78456f8d405 149cb5ac9c13 2678f81a699f a266f2accf89 2c31b9ee3dd6 b053d9fb6277 56d42047f742 3b05555d410c 6c13f401b629 a5166c4dde63 4a73f22abd5c
	I0213 14:50:42.040389    2040 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 14:50:42.117535    2040 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 14:50:42.121508    2040 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 13 22:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Feb 13 22:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Feb 13 22:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Feb 13 22:49 /etc/kubernetes/scheduler.conf
	
	I0213 14:50:42.121547    2040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0213 14:50:42.124854    2040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0213 14:50:42.127815    2040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0213 14:50:42.130659    2040 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 14:50:42.130683    2040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0213 14:50:42.133793    2040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0213 14:50:42.136516    2040 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 14:50:42.136535    2040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0213 14:50:42.139198    2040 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 14:50:42.142306    2040 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 14:50:42.142309    2040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 14:50:42.164372    2040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 14:50:42.464278    2040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 14:50:42.590265    2040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 14:50:42.622194    2040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
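Rather than a full "kubeadm init", the restart path replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing data directories, which is why each step above completes in well under a second. The full phase list is available from the staged binary:

    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init phase --help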
	I0213 14:50:42.656109    2040 api_server.go:52] waiting for apiserver process to appear ...
	I0213 14:50:42.656175    2040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 14:50:43.158217    2040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 14:50:43.658203    2040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 14:50:43.662740    2040 api_server.go:72] duration metric: took 1.006663375s to wait for apiserver process to appear ...
	I0213 14:50:43.662747    2040 api_server.go:88] waiting for apiserver healthz status ...
	I0213 14:50:43.662754    2040 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0213 14:50:45.923657    2040 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 14:50:45.923668    2040 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 14:50:45.923673    2040 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0213 14:50:45.935889    2040 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 14:50:45.935891    2040 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 14:50:46.164751    2040 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0213 14:50:46.168298    2040 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 14:50:46.168305    2040 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 14:50:46.664712    2040 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0213 14:50:46.669495    2040 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 14:50:46.669505    2040 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 14:50:47.164689    2040 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0213 14:50:47.167909    2040 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0213 14:50:47.173293    2040 api_server.go:141] control plane version: v1.28.4
	I0213 14:50:47.173302    2040 api_server.go:131] duration metric: took 3.51066075s to wait for apiserver health ...
	I0213 14:50:47.173306    2040 cni.go:84] Creating CNI manager for ""
	I0213 14:50:47.173312    2040 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:50:47.230632    2040 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 14:50:47.235025    2040 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 14:50:47.239874    2040 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 14:50:47.245183    2040 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 14:50:47.250065    2040 system_pods.go:59] 7 kube-system pods found
	I0213 14:50:47.250076    2040 system_pods.go:61] "coredns-5dd5756b68-j4dqh" [cc6922f7-7809-42a6-af44-b7aae85d55e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 14:50:47.250079    2040 system_pods.go:61] "etcd-functional-023000" [19eb7e0e-ace2-4fdf-ada4-2b6783a79c65] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 14:50:47.250082    2040 system_pods.go:61] "kube-apiserver-functional-023000" [c8ead5b0-47fa-44c5-b822-179729f2772a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 14:50:47.250085    2040 system_pods.go:61] "kube-controller-manager-functional-023000" [9b158f36-e940-4720-8805-a03004fbc530] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 14:50:47.250087    2040 system_pods.go:61] "kube-proxy-k7hxc" [aab0d5c3-c390-4ea8-942c-1b8d2a727ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 14:50:47.250089    2040 system_pods.go:61] "kube-scheduler-functional-023000" [82258949-a1b8-40ae-8933-82ed7c91d9b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 14:50:47.250092    2040 system_pods.go:61] "storage-provisioner" [76f674ca-abf0-4a30-9686-1037b7406533] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 14:50:47.250098    2040 system_pods.go:74] duration metric: took 4.910709ms to wait for pod list to return data ...
	I0213 14:50:47.250101    2040 node_conditions.go:102] verifying NodePressure condition ...
	I0213 14:50:47.251973    2040 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0213 14:50:47.251981    2040 node_conditions.go:123] node cpu capacity is 2
	I0213 14:50:47.251986    2040 node_conditions.go:105] duration metric: took 1.882458ms to run NodePressure ...
	I0213 14:50:47.251993    2040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 14:50:47.333113    2040 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 14:50:47.335377    2040 kubeadm.go:787] kubelet initialised
	I0213 14:50:47.335381    2040 kubeadm.go:788] duration metric: took 2.260291ms waiting for restarted kubelet to initialise ...
	I0213 14:50:47.335384    2040 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 14:50:47.338035    2040 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-j4dqh" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:49.343086    2040 pod_ready.go:102] pod "coredns-5dd5756b68-j4dqh" in "kube-system" namespace has status "Ready":"False"
	I0213 14:50:51.343434    2040 pod_ready.go:102] pod "coredns-5dd5756b68-j4dqh" in "kube-system" namespace has status "Ready":"False"
	I0213 14:50:52.843329    2040 pod_ready.go:92] pod "coredns-5dd5756b68-j4dqh" in "kube-system" namespace has status "Ready":"True"
	I0213 14:50:52.843334    2040 pod_ready.go:81] duration metric: took 5.505462958s waiting for pod "coredns-5dd5756b68-j4dqh" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:52.843337    2040 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:54.848516    2040 pod_ready.go:102] pod "etcd-functional-023000" in "kube-system" namespace has status "Ready":"False"
	I0213 14:50:56.346480    2040 pod_ready.go:92] pod "etcd-functional-023000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:50:56.346486    2040 pod_ready.go:81] duration metric: took 3.503252875s waiting for pod "etcd-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:56.346490    2040 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:57.351467    2040 pod_ready.go:92] pod "kube-apiserver-functional-023000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:50:57.351473    2040 pod_ready.go:81] duration metric: took 1.005010625s waiting for pod "kube-apiserver-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:57.351476    2040 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:57.353996    2040 pod_ready.go:92] pod "kube-controller-manager-functional-023000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:50:57.353999    2040 pod_ready.go:81] duration metric: took 2.520292ms waiting for pod "kube-controller-manager-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:57.354002    2040 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k7hxc" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:57.356190    2040 pod_ready.go:92] pod "kube-proxy-k7hxc" in "kube-system" namespace has status "Ready":"True"
	I0213 14:50:57.356193    2040 pod_ready.go:81] duration metric: took 2.188625ms waiting for pod "kube-proxy-k7hxc" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:57.356196    2040 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:57.358110    2040 pod_ready.go:92] pod "kube-scheduler-functional-023000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:50:57.358112    2040 pod_ready.go:81] duration metric: took 1.914625ms waiting for pod "kube-scheduler-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:57.358115    2040 pod_ready.go:38] duration metric: took 10.023034333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 14:50:57.358122    2040 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 14:50:57.361978    2040 ops.go:34] apiserver oom_adj: -16
	I0213 14:50:57.361982    2040 kubeadm.go:640] restartCluster took 15.343567041s
	I0213 14:50:57.361984    2040 kubeadm.go:406] StartCluster complete in 15.355564042s
	I0213 14:50:57.361991    2040 settings.go:142] acquiring lock: {Name:mkdd6397441cfaf6d06a74b65d6ddefdb863237c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:50:57.362061    2040 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:50:57.362394    2040 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/kubeconfig: {Name:mkf66d96abab1e512e6f2721c341e70e5b11c9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:50:57.362606    2040 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 14:50:57.362639    2040 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 14:50:57.362669    2040 addons.go:69] Setting storage-provisioner=true in profile "functional-023000"
	I0213 14:50:57.362679    2040 addons.go:234] Setting addon storage-provisioner=true in "functional-023000"
	W0213 14:50:57.362681    2040 addons.go:243] addon storage-provisioner should already be in state true
	I0213 14:50:57.362681    2040 addons.go:69] Setting default-storageclass=true in profile "functional-023000"
	I0213 14:50:57.362687    2040 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-023000"
	I0213 14:50:57.362705    2040 host.go:66] Checking if "functional-023000" exists ...
	I0213 14:50:57.362706    2040 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 14:50:57.363721    2040 addons.go:234] Setting addon default-storageclass=true in "functional-023000"
	W0213 14:50:57.363725    2040 addons.go:243] addon default-storageclass should already be in state true
	I0213 14:50:57.363731    2040 host.go:66] Checking if "functional-023000" exists ...
	I0213 14:50:57.364305    2040 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-023000" context rescaled to 1 replicas
	I0213 14:50:57.364313    2040 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 14:50:57.371687    2040 out.go:177] * Verifying Kubernetes components...
	I0213 14:50:57.374601    2040 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 14:50:57.364852    2040 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 14:50:57.378650    2040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 14:50:57.379918    2040 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/functional-023000/id_rsa Username:docker}
	I0213 14:50:57.378669    2040 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 14:50:57.379930    2040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 14:50:57.379935    2040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 14:50:57.379942    2040 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/functional-023000/id_rsa Username:docker}
	I0213 14:50:57.398254    2040 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0213 14:50:57.400301    2040 node_ready.go:35] waiting up to 6m0s for node "functional-023000" to be "Ready" ...
	I0213 14:50:57.401726    2040 node_ready.go:49] node "functional-023000" has status "Ready":"True"
	I0213 14:50:57.401737    2040 node_ready.go:38] duration metric: took 1.423125ms waiting for node "functional-023000" to be "Ready" ...
	I0213 14:50:57.401739    2040 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 14:50:57.412168    2040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 14:50:57.439807    2040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 14:50:57.548899    2040 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j4dqh" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:57.775405    2040 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0213 14:50:57.779353    2040 addons.go:505] enable addons completed in 416.725917ms: enabled=[storage-provisioner default-storageclass]
	I0213 14:50:57.946733    2040 pod_ready.go:92] pod "coredns-5dd5756b68-j4dqh" in "kube-system" namespace has status "Ready":"True"
	I0213 14:50:57.946738    2040 pod_ready.go:81] duration metric: took 397.845208ms waiting for pod "coredns-5dd5756b68-j4dqh" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:57.946742    2040 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:58.346906    2040 pod_ready.go:92] pod "etcd-functional-023000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:50:58.346913    2040 pod_ready.go:81] duration metric: took 400.180541ms waiting for pod "etcd-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:58.346917    2040 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:58.746786    2040 pod_ready.go:92] pod "kube-apiserver-functional-023000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:50:58.746791    2040 pod_ready.go:81] duration metric: took 399.884042ms waiting for pod "kube-apiserver-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:58.746795    2040 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:59.146818    2040 pod_ready.go:92] pod "kube-controller-manager-functional-023000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:50:59.146823    2040 pod_ready.go:81] duration metric: took 400.038208ms waiting for pod "kube-controller-manager-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:59.146827    2040 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k7hxc" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:59.546188    2040 pod_ready.go:92] pod "kube-proxy-k7hxc" in "kube-system" namespace has status "Ready":"True"
	I0213 14:50:59.546198    2040 pod_ready.go:81] duration metric: took 399.375875ms waiting for pod "kube-proxy-k7hxc" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:59.546202    2040 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:59.945596    2040 pod_ready.go:92] pod "kube-scheduler-functional-023000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:50:59.945601    2040 pod_ready.go:81] duration metric: took 399.408708ms waiting for pod "kube-scheduler-functional-023000" in "kube-system" namespace to be "Ready" ...
	I0213 14:50:59.945606    2040 pod_ready.go:38] duration metric: took 2.543940542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 14:50:59.945618    2040 api_server.go:52] waiting for apiserver process to appear ...
	I0213 14:50:59.945699    2040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 14:50:59.950761    2040 api_server.go:72] duration metric: took 2.586517959s to wait for apiserver process to appear ...
	I0213 14:50:59.950765    2040 api_server.go:88] waiting for apiserver healthz status ...
	I0213 14:50:59.950771    2040 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0213 14:50:59.953769    2040 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0213 14:50:59.954451    2040 api_server.go:141] control plane version: v1.28.4
	I0213 14:50:59.954456    2040 api_server.go:131] duration metric: took 3.6895ms to wait for apiserver health ...
	I0213 14:50:59.954458    2040 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 14:51:00.148913    2040 system_pods.go:59] 7 kube-system pods found
	I0213 14:51:00.148920    2040 system_pods.go:61] "coredns-5dd5756b68-j4dqh" [cc6922f7-7809-42a6-af44-b7aae85d55e6] Running
	I0213 14:51:00.148922    2040 system_pods.go:61] "etcd-functional-023000" [19eb7e0e-ace2-4fdf-ada4-2b6783a79c65] Running
	I0213 14:51:00.148924    2040 system_pods.go:61] "kube-apiserver-functional-023000" [c8ead5b0-47fa-44c5-b822-179729f2772a] Running
	I0213 14:51:00.148926    2040 system_pods.go:61] "kube-controller-manager-functional-023000" [9b158f36-e940-4720-8805-a03004fbc530] Running
	I0213 14:51:00.148928    2040 system_pods.go:61] "kube-proxy-k7hxc" [aab0d5c3-c390-4ea8-942c-1b8d2a727ec3] Running
	I0213 14:51:00.148929    2040 system_pods.go:61] "kube-scheduler-functional-023000" [82258949-a1b8-40ae-8933-82ed7c91d9b7] Running
	I0213 14:51:00.148931    2040 system_pods.go:61] "storage-provisioner" [76f674ca-abf0-4a30-9686-1037b7406533] Running
	I0213 14:51:00.148934    2040 system_pods.go:74] duration metric: took 194.479625ms to wait for pod list to return data ...
	I0213 14:51:00.148937    2040 default_sa.go:34] waiting for default service account to be created ...
	I0213 14:51:00.346861    2040 default_sa.go:45] found service account: "default"
	I0213 14:51:00.346867    2040 default_sa.go:55] duration metric: took 197.934ms for default service account to be created ...
	I0213 14:51:00.346869    2040 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 14:51:00.548902    2040 system_pods.go:86] 7 kube-system pods found
	I0213 14:51:00.548908    2040 system_pods.go:89] "coredns-5dd5756b68-j4dqh" [cc6922f7-7809-42a6-af44-b7aae85d55e6] Running
	I0213 14:51:00.548910    2040 system_pods.go:89] "etcd-functional-023000" [19eb7e0e-ace2-4fdf-ada4-2b6783a79c65] Running
	I0213 14:51:00.548912    2040 system_pods.go:89] "kube-apiserver-functional-023000" [c8ead5b0-47fa-44c5-b822-179729f2772a] Running
	I0213 14:51:00.548914    2040 system_pods.go:89] "kube-controller-manager-functional-023000" [9b158f36-e940-4720-8805-a03004fbc530] Running
	I0213 14:51:00.548916    2040 system_pods.go:89] "kube-proxy-k7hxc" [aab0d5c3-c390-4ea8-942c-1b8d2a727ec3] Running
	I0213 14:51:00.548918    2040 system_pods.go:89] "kube-scheduler-functional-023000" [82258949-a1b8-40ae-8933-82ed7c91d9b7] Running
	I0213 14:51:00.548920    2040 system_pods.go:89] "storage-provisioner" [76f674ca-abf0-4a30-9686-1037b7406533] Running
	I0213 14:51:00.548922    2040 system_pods.go:126] duration metric: took 202.05775ms to wait for k8s-apps to be running ...
	I0213 14:51:00.548925    2040 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 14:51:00.548985    2040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 14:51:00.554269    2040 system_svc.go:56] duration metric: took 5.343083ms WaitForService to wait for kubelet.
	I0213 14:51:00.554274    2040 kubeadm.go:581] duration metric: took 3.190051042s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 14:51:00.554282    2040 node_conditions.go:102] verifying NodePressure condition ...
	I0213 14:51:00.745858    2040 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0213 14:51:00.745864    2040 node_conditions.go:123] node cpu capacity is 2
	I0213 14:51:00.745869    2040 node_conditions.go:105] duration metric: took 191.590875ms to run NodePressure ...
	I0213 14:51:00.745873    2040 start.go:228] waiting for startup goroutines ...
	I0213 14:51:00.745876    2040 start.go:233] waiting for cluster config update ...
	I0213 14:51:00.745880    2040 start.go:242] writing updated cluster config ...
	I0213 14:51:00.746214    2040 ssh_runner.go:195] Run: rm -f paused
	I0213 14:51:00.776283    2040 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 14:51:00.781265    2040 out.go:177] * Done! kubectl is now configured to use "functional-023000" cluster and "default" namespace by default
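
The pod_ready.go lines above are a plain readiness poll: fetch each system-critical pod, check whether its Ready condition is True, and retry on a roughly 2-second cadence until the 4m0s budget expires. A minimal client-go sketch of such a loop follows; this is not minikube's implementation, and the kubeconfig path and pod name are illustrative.

    // Minimal pod-readiness poll; assumes k8s.io/client-go is on the module path.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(4 * time.Minute) // the log's 4m0s per-pod budget
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-5dd5756b68-j4dqh", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // matches the ~2s poll interval in the log
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }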
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-02-13 22:48:59 UTC, ends at Tue 2024-02-13 22:52:01 UTC. --
	Feb 13 22:51:53 functional-023000 dockerd[6759]: time="2024-02-13T22:51:53.153152623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:51:53 functional-023000 dockerd[6759]: time="2024-02-13T22:51:53.153161248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 13 22:51:53 functional-023000 dockerd[6759]: time="2024-02-13T22:51:53.153526151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:51:53 functional-023000 cri-dockerd[6955]: time="2024-02-13T22:51:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c52d66550a57be5a5dd6526d024bf556103c31a6eb8201268f0f299681d2450a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 13 22:51:57 functional-023000 dockerd[6759]: time="2024-02-13T22:51:57.805148033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 13 22:51:57 functional-023000 dockerd[6759]: time="2024-02-13T22:51:57.805206407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:51:57 functional-023000 dockerd[6759]: time="2024-02-13T22:51:57.805225489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 13 22:51:57 functional-023000 dockerd[6759]: time="2024-02-13T22:51:57.805232989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:51:57 functional-023000 dockerd[6753]: time="2024-02-13T22:51:57.856087051Z" level=info msg="ignoring event" container=b848b52a82aef10f875ea4d3fb2e95ed74622b2c8065e70a9c5b12d7710ad65b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 22:51:57 functional-023000 dockerd[6759]: time="2024-02-13T22:51:57.856348001Z" level=info msg="shim disconnected" id=b848b52a82aef10f875ea4d3fb2e95ed74622b2c8065e70a9c5b12d7710ad65b namespace=moby
	Feb 13 22:51:57 functional-023000 dockerd[6759]: time="2024-02-13T22:51:57.856388249Z" level=warning msg="cleaning up after shim disconnected" id=b848b52a82aef10f875ea4d3fb2e95ed74622b2c8065e70a9c5b12d7710ad65b namespace=moby
	Feb 13 22:51:57 functional-023000 dockerd[6759]: time="2024-02-13T22:51:57.856392666Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 13 22:51:58 functional-023000 cri-dockerd[6955]: time="2024-02-13T22:51:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Feb 13 22:51:58 functional-023000 dockerd[6759]: time="2024-02-13T22:51:58.890023174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 13 22:51:58 functional-023000 dockerd[6759]: time="2024-02-13T22:51:58.890052714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:51:58 functional-023000 dockerd[6759]: time="2024-02-13T22:51:58.890059214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 13 22:51:58 functional-023000 dockerd[6759]: time="2024-02-13T22:51:58.890063339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:51:58 functional-023000 dockerd[6759]: time="2024-02-13T22:51:58.960634038Z" level=info msg="shim disconnected" id=61b370ef1bcd879dbf61a0fac1ae32b68d5b3447198e73224f04b818800256fa namespace=moby
	Feb 13 22:51:58 functional-023000 dockerd[6753]: time="2024-02-13T22:51:58.960763243Z" level=info msg="ignoring event" container=61b370ef1bcd879dbf61a0fac1ae32b68d5b3447198e73224f04b818800256fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 22:51:58 functional-023000 dockerd[6759]: time="2024-02-13T22:51:58.961051317Z" level=warning msg="cleaning up after shim disconnected" id=61b370ef1bcd879dbf61a0fac1ae32b68d5b3447198e73224f04b818800256fa namespace=moby
	Feb 13 22:51:58 functional-023000 dockerd[6759]: time="2024-02-13T22:51:58.961061734Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 13 22:52:00 functional-023000 dockerd[6753]: time="2024-02-13T22:52:00.458442097Z" level=info msg="ignoring event" container=c52d66550a57be5a5dd6526d024bf556103c31a6eb8201268f0f299681d2450a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 22:52:00 functional-023000 dockerd[6759]: time="2024-02-13T22:52:00.458935000Z" level=info msg="shim disconnected" id=c52d66550a57be5a5dd6526d024bf556103c31a6eb8201268f0f299681d2450a namespace=moby
	Feb 13 22:52:00 functional-023000 dockerd[6759]: time="2024-02-13T22:52:00.458964790Z" level=warning msg="cleaning up after shim disconnected" id=c52d66550a57be5a5dd6526d024bf556103c31a6eb8201268f0f299681d2450a namespace=moby
	Feb 13 22:52:00 functional-023000 dockerd[6759]: time="2024-02-13T22:52:00.458968999Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	61b370ef1bcd8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   3 seconds ago        Exited              mount-munger              0                   c52d66550a57b       busybox-mount
	b848b52a82aef       72565bf5bbedf                                                                                         4 seconds ago        Exited              echoserver-arm            3                   9976d404e9a2d       hello-node-759d89bdcc-xzst7
	c51898944fdad       72565bf5bbedf                                                                                         11 seconds ago       Exited              echoserver-arm            2                   fb074f3fdbb53       hello-node-connect-7799dfb7c6-xxjct
	e2b018351c1b8       nginx@sha256:0e1330510a8e57568e7e908b27a50658ae84de9e9f907647cb4628fbc799f938                         18 seconds ago       Running             myfrontend                0                   7cb111cb62aa7       sp-pod
	9abde6462e634       nginx@sha256:f2802c2a9d09c7aa3ace27445dfc5656ff24355da28e7b958074a0111e3fc076                         32 seconds ago       Running             nginx                     0                   63dfdfc03a22b       nginx-svc
	363d1e8b3bbed       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   52472a12e5f20       coredns-5dd5756b68-j4dqh
	5b37aa35c1d09       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   818e1f22a2297       storage-provisioner
	d16f36e64f5a4       3ca3ca488cf13                                                                                         About a minute ago   Running             kube-proxy                2                   1a9c5a71636f3       kube-proxy-k7hxc
	d6b40b3486d06       9961cbceaf234                                                                                         About a minute ago   Running             kube-controller-manager   2                   e0a0514050e08       kube-controller-manager-functional-023000
	bd5e16fde3662       04b4c447bb9d4                                                                                         About a minute ago   Running             kube-apiserver            0                   2e91d240d503d       kube-apiserver-functional-023000
	aa914f9a54673       9cdd6470f48c8                                                                                         About a minute ago   Running             etcd                      2                   44de1d116ebfe       etcd-functional-023000
	f7314cfd5f9db       05c284c929889                                                                                         About a minute ago   Running             kube-scheduler            2                   0afc184149278       kube-scheduler-functional-023000
	e50a56d76c4f4       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   3a00cb470dbb1       storage-provisioner
	f713ef213d1e3       9cdd6470f48c8                                                                                         2 minutes ago        Exited              etcd                      1                   2e07c86de0d0d       etcd-functional-023000
	8247211c0b94d       97e04611ad434                                                                                         2 minutes ago        Exited              coredns                   1                   20a6df9ad1250       coredns-5dd5756b68-j4dqh
	02a5890ca97a4       3ca3ca488cf13                                                                                         2 minutes ago        Exited              kube-proxy                1                   66efa4dc19720       kube-proxy-k7hxc
	94ebb14965ec1       05c284c929889                                                                                         2 minutes ago        Exited              kube-scheduler            1                   f6cfdad2964f9       kube-scheduler-functional-023000
	17ff29fb90345       9961cbceaf234                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   ff65fb976a72d       kube-controller-manager-functional-023000
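
The table above is the container runtime's view at capture time: the current control-plane containers are Running, the pre-restart set has Exited, and the two echoserver-arm pods are crash-looping. A rough equivalent of this listing with the Docker Engine Go client is sketched below; it assumes the docker 24.x-era client API (the node reports docker://24.0.7) and reads the daemon address from the environment.

    // Sketch: list all containers, similar to the status table above.
    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/api/types"
        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        // All:true includes Exited containers, like the pre-restart set above.
        containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{All: true})
        if err != nil {
            panic(err)
        }
        for _, c := range containers {
            fmt.Printf("%.12s  %-40.40s  %-8s  %s\n", c.ID, c.Image, c.State, c.Status)
        }
    }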
	
	
	==> coredns [363d1e8b3bbe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49313 - 45244 "HINFO IN 2422349660560043206.2224516744457748921. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008772097s
	[INFO] 10.244.0.1:31121 - 1714 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000109033s
	[INFO] 10.244.0.1:26876 - 7624 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000060412s
	[INFO] 10.244.0.1:13074 - 41700 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.00002229s
	[INFO] 10.244.0.1:60552 - 61458 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000982965s
	[INFO] 10.244.0.1:62820 - 41601 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000054704s
	[INFO] 10.244.0.1:8906 - 6704 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000076994s
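
The healthy CoreDNS replica above is answering A and AAAA queries for nginx-svc.default.svc.cluster.local. The sketch below reproduces such a lookup by pointing a Go resolver directly at the cluster DNS service; 10.96.0.10 is the nameserver cri-dockerd wrote into the pod's resolv.conf in the Docker journal above, and the query only succeeds from inside the cluster network.

    // Sketch: resolve a Service name against the cluster DNS directly.
    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                // Ignore the host's resolver; send every query to cluster DNS.
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("nginx-svc resolves to:", addrs) // the log shows a ClusterIP answer
    }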
	
	
	==> coredns [8247211c0b94] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60106 - 50238 "HINFO IN 8938237480464064656.3946831428024127251. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009409482s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
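
This replica is the pre-restart CoreDNS instance: it received SIGTERM during the cluster restart and drained through a 5s lameduck window before exiting. The same shutdown pattern in Go, sketched under the assumption of a 5-second drain period like the health plugin's above:

    // Sketch: handle SIGTERM, then keep serving briefly so in-flight work drains.
    package main

    import (
        "context"
        "fmt"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
        defer stop()

        <-ctx.Done() // blocks until SIGTERM arrives
        fmt.Println("SIGTERM received, entering lameduck mode for 5s")
        time.Sleep(5 * time.Second) // drain window, mirroring the log above
        fmt.Println("terminating")
    }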
	
	
	==> describe nodes <==
	Name:               functional-023000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-023000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb52fe04bc8b044b129ef2ff27607d20a9fceb93
	                    minikube.k8s.io/name=functional-023000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T14_49_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 22:49:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-023000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 22:51:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 22:51:47 +0000   Tue, 13 Feb 2024 22:49:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 22:51:47 +0000   Tue, 13 Feb 2024 22:49:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 22:51:47 +0000   Tue, 13 Feb 2024 22:49:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 22:51:47 +0000   Tue, 13 Feb 2024 22:49:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-023000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904700Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904700Ki
	  pods:               110
	System Info:
	  Machine ID:                 150140fd123d4da79335ad7c25c50290
	  System UUID:                150140fd123d4da79335ad7c25c50290
	  Boot ID:                    8ca2d10a-088e-4e65-8c43-644f047c5cc9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-xzst7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  default                     hello-node-connect-7799dfb7c6-xxjct          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 coredns-5dd5756b68-j4dqh                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m32s
	  kube-system                 etcd-functional-023000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m45s
	  kube-system                 kube-apiserver-functional-023000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-functional-023000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-proxy-k7hxc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-functional-023000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m31s                  kube-proxy       
	  Normal  Starting                 74s                    kube-proxy       
	  Normal  Starting                 2m5s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m50s (x8 over 2m50s)  kubelet          Node functional-023000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m50s (x8 over 2m50s)  kubelet          Node functional-023000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m50s (x7 over 2m50s)  kubelet          Node functional-023000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m45s                  kubelet          Node functional-023000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m45s                  kubelet          Node functional-023000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m45s                  kubelet          Node functional-023000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m45s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m42s                  kubelet          Node functional-023000 status is now: NodeReady
	  Normal  RegisteredNode           2m33s                  node-controller  Node functional-023000 event: Registered Node functional-023000 in Controller
	  Normal  RegisteredNode           113s                   node-controller  Node functional-023000 event: Registered Node functional-023000 in Controller
	  Normal  NodeHasNoDiskPressure    79s (x8 over 79s)      kubelet          Node functional-023000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  79s (x8 over 79s)      kubelet          Node functional-023000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 79s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     79s (x7 over 79s)      kubelet          Node functional-023000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s                    node-controller  Node functional-023000 event: Registered Node functional-023000 in Controller
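
The NodePressure verification near the end of the start log reads the same Conditions and Capacity fields shown above. A client-go sketch of that check follows; the kubeconfig path is illustrative, while the node name is this run's.

    // Sketch: print a node's conditions and capacity, as "describe nodes" does.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-023000", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
        }
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        // The start log reported cpu capacity 2 and ephemeral storage 17784760Ki.
        fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), storage.String())
    }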
	
	
	==> dmesg <==
	[  +0.034972] systemd-fstab-generator[3726]: Ignoring "noauto" for root device
	[  +0.139335] systemd-fstab-generator[3760]: Ignoring "noauto" for root device
	[  +0.099326] systemd-fstab-generator[3771]: Ignoring "noauto" for root device
	[  +0.101225] systemd-fstab-generator[3784]: Ignoring "noauto" for root device
	[  +5.069807] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.156726] systemd-fstab-generator[4346]: Ignoring "noauto" for root device
	[  +0.068872] systemd-fstab-generator[4357]: Ignoring "noauto" for root device
	[  +0.078241] systemd-fstab-generator[4368]: Ignoring "noauto" for root device
	[  +0.097736] systemd-fstab-generator[4382]: Ignoring "noauto" for root device
	[  +7.098551] kauditd_printk_skb: 77 callbacks suppressed
	[Feb13 22:50] systemd-fstab-generator[6294]: Ignoring "noauto" for root device
	[  +0.141515] systemd-fstab-generator[6327]: Ignoring "noauto" for root device
	[  +0.107571] systemd-fstab-generator[6338]: Ignoring "noauto" for root device
	[  +0.098193] systemd-fstab-generator[6351]: Ignoring "noauto" for root device
	[  +5.123374] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.242496] systemd-fstab-generator[6912]: Ignoring "noauto" for root device
	[  +0.081953] systemd-fstab-generator[6923]: Ignoring "noauto" for root device
	[  +0.080650] systemd-fstab-generator[6934]: Ignoring "noauto" for root device
	[  +0.098400] systemd-fstab-generator[6948]: Ignoring "noauto" for root device
	[  +0.839152] systemd-fstab-generator[7200]: Ignoring "noauto" for root device
	[  +4.594632] kauditd_printk_skb: 89 callbacks suppressed
	[Feb13 22:51] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.020756] kauditd_printk_skb: 4 callbacks suppressed
	[ +21.519341] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.056088] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [aa914f9a5467] <==
	{"level":"info","ts":"2024-02-13T22:50:43.541104Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-13T22:50:43.541131Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-02-13T22:50:43.541186Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-13T22:50:43.541196Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-13T22:50:43.541199Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-13T22:50:43.541279Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-02-13T22:50:43.541282Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-02-13T22:50:43.541488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-02-13T22:50:43.541508Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-02-13T22:50:43.541548Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T22:50:43.541558Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T22:50:45.411096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-13T22:50:45.411322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-13T22:50:45.41137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-02-13T22:50:45.411408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-02-13T22:50:45.411426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-02-13T22:50:45.411452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-02-13T22:50:45.411476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-02-13T22:50:45.413952Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-023000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T22:50:45.413961Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T22:50:45.414325Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T22:50:45.414377Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T22:50:45.41401Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T22:50:45.416988Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T22:50:45.417137Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> etcd [f713ef213d1e] <==
	{"level":"info","ts":"2024-02-13T22:49:54.564029Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-02-13T22:49:56.021513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-13T22:49:56.021618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-13T22:49:56.021652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-02-13T22:49:56.021682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-02-13T22:49:56.021768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-02-13T22:49:56.021798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-02-13T22:49:56.021827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-02-13T22:49:56.024992Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-023000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T22:49:56.025068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T22:49:56.026123Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T22:49:56.026158Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T22:49:56.026192Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T22:49:56.027507Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T22:49:56.028063Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-02-13T22:50:30.268294Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-13T22:50:30.268317Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-023000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-02-13T22:50:30.268356Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-13T22:50:30.268397Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-13T22:50:30.2817Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-13T22:50:30.281728Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-13T22:50:30.281747Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-02-13T22:50:30.282988Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-02-13T22:50:30.283015Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-02-13T22:50:30.283019Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-023000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 22:52:01 up 3 min,  0 users,  load average: 0.84, 0.33, 0.12
	Linux functional-023000 5.10.57 #1 SMP PREEMPT Thu Dec 28 19:03:47 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [bd5e16fde366] <==
	I0213 22:50:46.100688       1 shared_informer.go:318] Caches are synced for configmaps
	I0213 22:50:46.100735       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0213 22:50:46.101897       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0213 22:50:46.101909       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0213 22:50:46.101911       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0213 22:50:46.102275       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0213 22:50:46.102289       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0213 22:50:46.102891       1 aggregator.go:166] initial CRD sync complete...
	I0213 22:50:46.102902       1 autoregister_controller.go:141] Starting autoregister controller
	I0213 22:50:46.102904       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0213 22:50:46.102907       1 cache.go:39] Caches are synced for autoregister controller
	I0213 22:50:47.002287       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0213 22:50:47.108312       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0213 22:50:47.108808       1 controller.go:624] quota admission added evaluator for: endpoints
	I0213 22:50:47.113132       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0213 22:50:47.409359       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0213 22:50:47.412642       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0213 22:50:47.425005       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0213 22:50:47.432371       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0213 22:50:47.434749       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0213 22:51:02.382145       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.218.61"}
	I0213 22:51:08.537532       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0213 22:51:08.585224       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.252.138"}
	I0213 22:51:25.516424       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.98.254"}
	I0213 22:51:34.959183       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.34.99"}
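
This is the post-restart apiserver coming up: caches sync, quota evaluators register as each workload type first appears, and ClusterIPs are allocated for the test services. The start log's health gate was a plain GET against https://192.168.105.4:8441/healthz; a sketch of that probe is below, with TLS verification skipped for brevity (the real check would trust the cluster CA instead).

    // Sketch: probe the apiserver healthz endpoint, as the start log did.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Brevity-only assumption; prefer loading the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.105.4:8441/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
    }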
	
	
	==> kube-controller-manager [17ff29fb9034] <==
	I0213 22:50:08.781635       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0213 22:50:08.781689       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-023000"
	I0213 22:50:08.781718       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0213 22:50:08.781911       1 event.go:307] "Event occurred" object="functional-023000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-023000 event: Registered Node functional-023000 in Controller"
	I0213 22:50:08.782530       1 shared_informer.go:318] Caches are synced for expand
	I0213 22:50:08.783643       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0213 22:50:08.783656       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0213 22:50:08.785087       1 shared_informer.go:318] Caches are synced for daemon sets
	I0213 22:50:08.785891       1 shared_informer.go:318] Caches are synced for job
	I0213 22:50:08.785922       1 shared_informer.go:318] Caches are synced for deployment
	I0213 22:50:08.787222       1 shared_informer.go:318] Caches are synced for TTL
	I0213 22:50:08.788301       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0213 22:50:08.792559       1 shared_informer.go:318] Caches are synced for PVC protection
	I0213 22:50:08.798246       1 shared_informer.go:318] Caches are synced for namespace
	I0213 22:50:08.800360       1 shared_informer.go:318] Caches are synced for PV protection
	I0213 22:50:08.802500       1 shared_informer.go:318] Caches are synced for cronjob
	I0213 22:50:08.804644       1 shared_informer.go:318] Caches are synced for ephemeral
	I0213 22:50:08.889414       1 shared_informer.go:318] Caches are synced for attach detach
	I0213 22:50:08.901765       1 shared_informer.go:318] Caches are synced for endpoint
	I0213 22:50:08.952605       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0213 22:50:08.960966       1 shared_informer.go:318] Caches are synced for resource quota
	I0213 22:50:09.004743       1 shared_informer.go:318] Caches are synced for resource quota
	I0213 22:50:09.320915       1 shared_informer.go:318] Caches are synced for garbage collector
	I0213 22:50:09.400306       1 shared_informer.go:318] Caches are synced for garbage collector
	I0213 22:50:09.400317       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [d6b40b3486d0] <==
	I0213 22:50:59.138381       1 shared_informer.go:318] Caches are synced for garbage collector
	I0213 22:50:59.138391       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0213 22:51:08.539815       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-759d89bdcc to 1"
	I0213 22:51:08.551080       1 event.go:307] "Event occurred" object="default/hello-node-759d89bdcc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-759d89bdcc-xzst7"
	I0213 22:51:08.556223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="16.755399ms"
	I0213 22:51:08.558533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="2.283233ms"
	I0213 22:51:08.560253       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="28.828µs"
	I0213 22:51:08.564777       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="37.451µs"
	I0213 22:51:17.090133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="31.538µs"
	I0213 22:51:18.101390       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="46.493µs"
	I0213 22:51:19.109958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="23.83µs"
	I0213 22:51:30.341782       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0213 22:51:33.245894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="38.664µs"
	I0213 22:51:34.917142       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-7799dfb7c6 to 1"
	I0213 22:51:34.922426       1 event.go:307] "Event occurred" object="default/hello-node-connect-7799dfb7c6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-7799dfb7c6-xxjct"
	I0213 22:51:34.925048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="8.094459ms"
	I0213 22:51:34.927189       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="2.122711ms"
	I0213 22:51:34.927210       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="10.499µs"
	I0213 22:51:34.933853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="18.082µs"
	I0213 22:51:36.273914       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="28.082µs"
	I0213 22:51:37.287942       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="23.248µs"
	I0213 22:51:46.768812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="24.624µs"
	I0213 22:51:50.764114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="27.498µs"
	I0213 22:51:51.368378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="28.541µs"
	I0213 22:51:58.407591       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="27.624µs"
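
The controller-manager lines above are mostly Deployment/ReplicaSet reconciliation, plus a PVC waiting on the k8s.io/minikube-hostpath external provisioner. Those "Event occurred" records land in the Events API and can be read back with client-go, as sketched here (kubeconfig path illustrative):

    // Sketch: list the default namespace's events, which the messages above produced.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        events, err := client.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, e := range events.Items {
            // e.g. "Normal Deployment/hello-node ScalingReplicaSet: Scaled up ..."
            fmt.Printf("%s %s/%s %s: %s\n",
                e.Type, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Reason, e.Message)
        }
    }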
	
	
	==> kube-proxy [02a5890ca97a] <==
	I0213 22:49:54.969098       1 server_others.go:69] "Using iptables proxy"
	I0213 22:49:56.701602       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0213 22:49:56.711121       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0213 22:49:56.711135       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 22:49:56.711707       1 server_others.go:152] "Using iptables Proxier"
	I0213 22:49:56.711729       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 22:49:56.711811       1 server.go:846] "Version info" version="v1.28.4"
	I0213 22:49:56.711819       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 22:49:56.712672       1 config.go:188] "Starting service config controller"
	I0213 22:49:56.712678       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 22:49:56.712687       1 config.go:97] "Starting endpoint slice config controller"
	I0213 22:49:56.712689       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 22:49:56.712973       1 config.go:315] "Starting node config controller"
	I0213 22:49:56.712977       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 22:49:56.813265       1 shared_informer.go:318] Caches are synced for node config
	I0213 22:49:56.813364       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0213 22:49:56.813374       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [d16f36e64f5a] <==
	I0213 22:50:47.302555       1 server_others.go:69] "Using iptables proxy"
	I0213 22:50:47.308296       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0213 22:50:47.329163       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0213 22:50:47.329178       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 22:50:47.329951       1 server_others.go:152] "Using iptables Proxier"
	I0213 22:50:47.329970       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 22:50:47.330049       1 server.go:846] "Version info" version="v1.28.4"
	I0213 22:50:47.330055       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 22:50:47.330875       1 config.go:188] "Starting service config controller"
	I0213 22:50:47.330903       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 22:50:47.330912       1 config.go:97] "Starting endpoint slice config controller"
	I0213 22:50:47.330913       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 22:50:47.331542       1 config.go:315] "Starting node config controller"
	I0213 22:50:47.331545       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 22:50:47.431296       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0213 22:50:47.431335       1 shared_informer.go:318] Caches are synced for service config
	I0213 22:50:47.431603       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [94ebb14965ec] <==
	I0213 22:49:54.962723       1 serving.go:348] Generated self-signed cert in-memory
	W0213 22:49:56.643560       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0213 22:49:56.643653       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 22:49:56.643677       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0213 22:49:56.643692       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0213 22:49:56.690571       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0213 22:49:56.690585       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 22:49:56.691948       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0213 22:49:56.692102       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0213 22:49:56.692110       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 22:49:56.692118       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0213 22:49:56.793425       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 22:50:30.264846       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0213 22:50:30.264877       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0213 22:50:30.264959       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f7314cfd5f9d] <==
	I0213 22:50:43.788092       1 serving.go:348] Generated self-signed cert in-memory
	W0213 22:50:46.024034       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0213 22:50:46.024048       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 22:50:46.024053       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0213 22:50:46.024056       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0213 22:50:46.067282       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0213 22:50:46.067296       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 22:50:46.067951       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0213 22:50:46.067989       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 22:50:46.068301       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0213 22:50:46.075330       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0213 22:50:46.168502       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 22:48:59 UTC, ends at Tue 2024-02-13 22:52:02 UTC. --
	Feb 13 22:51:42 functional-023000 kubelet[7206]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 22:51:42 functional-023000 kubelet[7206]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 22:51:42 functional-023000 kubelet[7206]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 22:51:42 functional-023000 kubelet[7206]: I0213 22:51:42.848127    7206 scope.go:117] "RemoveContainer" containerID="394efea15479f22f9547b4f373b01f6bc221537517c69fc5ac0c5cafa3cebca9"
	Feb 13 22:51:46 functional-023000 kubelet[7206]: I0213 22:51:46.761801    7206 scope.go:117] "RemoveContainer" containerID="906530b8265a3be97ecef3acf07a7d8cb242bd3c897a838b55e69ad9cac78071"
	Feb 13 22:51:46 functional-023000 kubelet[7206]: E0213 22:51:46.761939    7206 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-xzst7_default(d70d9020-134f-4d0e-b670-298fb9f63319)\"" pod="default/hello-node-759d89bdcc-xzst7" podUID="d70d9020-134f-4d0e-b670-298fb9f63319"
	Feb 13 22:51:46 functional-023000 kubelet[7206]: I0213 22:51:46.768290    7206 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=3.70332446 podCreationTimestamp="2024-02-13 22:51:42 +0000 UTC" firstStartedPulling="2024-02-13 22:51:42.892450005 +0000 UTC m=+60.202629269" lastFinishedPulling="2024-02-13 22:51:43.957391061 +0000 UTC m=+61.267570324" observedRunningTime="2024-02-13 22:51:44.326345622 +0000 UTC m=+61.636524885" watchObservedRunningTime="2024-02-13 22:51:46.768265515 +0000 UTC m=+64.078444737"
	Feb 13 22:51:50 functional-023000 kubelet[7206]: I0213 22:51:50.758354    7206 scope.go:117] "RemoveContainer" containerID="ba3535b05a327b925e1cd9c46cb61842d65649f139f6d9dad3f8345b85cc6243"
	Feb 13 22:51:51 functional-023000 kubelet[7206]: I0213 22:51:51.361888    7206 scope.go:117] "RemoveContainer" containerID="ba3535b05a327b925e1cd9c46cb61842d65649f139f6d9dad3f8345b85cc6243"
	Feb 13 22:51:51 functional-023000 kubelet[7206]: I0213 22:51:51.362073    7206 scope.go:117] "RemoveContainer" containerID="c51898944fdad30b6c7957d1dd1764799b5d3bb854969169c1bb78f3d1a4d9f0"
	Feb 13 22:51:51 functional-023000 kubelet[7206]: E0213 22:51:51.362168    7206 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-xxjct_default(a91f83b1-de3a-4e82-bbc1-a07654eda3a2)\"" pod="default/hello-node-connect-7799dfb7c6-xxjct" podUID="a91f83b1-de3a-4e82-bbc1-a07654eda3a2"
	Feb 13 22:51:52 functional-023000 kubelet[7206]: I0213 22:51:52.804054    7206 topology_manager.go:215] "Topology Admit Handler" podUID="05417798-acfc-429f-bd82-07978c196c97" podNamespace="default" podName="busybox-mount"
	Feb 13 22:51:52 functional-023000 kubelet[7206]: I0213 22:51:52.933891    7206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q96gx\" (UniqueName: \"kubernetes.io/projected/05417798-acfc-429f-bd82-07978c196c97-kube-api-access-q96gx\") pod \"busybox-mount\" (UID: \"05417798-acfc-429f-bd82-07978c196c97\") " pod="default/busybox-mount"
	Feb 13 22:51:52 functional-023000 kubelet[7206]: I0213 22:51:52.933914    7206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/05417798-acfc-429f-bd82-07978c196c97-test-volume\") pod \"busybox-mount\" (UID: \"05417798-acfc-429f-bd82-07978c196c97\") " pod="default/busybox-mount"
	Feb 13 22:51:57 functional-023000 kubelet[7206]: I0213 22:51:57.760823    7206 scope.go:117] "RemoveContainer" containerID="906530b8265a3be97ecef3acf07a7d8cb242bd3c897a838b55e69ad9cac78071"
	Feb 13 22:51:58 functional-023000 kubelet[7206]: I0213 22:51:58.400839    7206 scope.go:117] "RemoveContainer" containerID="906530b8265a3be97ecef3acf07a7d8cb242bd3c897a838b55e69ad9cac78071"
	Feb 13 22:51:58 functional-023000 kubelet[7206]: I0213 22:51:58.401004    7206 scope.go:117] "RemoveContainer" containerID="b848b52a82aef10f875ea4d3fb2e95ed74622b2c8065e70a9c5b12d7710ad65b"
	Feb 13 22:51:58 functional-023000 kubelet[7206]: E0213 22:51:58.401094    7206 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-xzst7_default(d70d9020-134f-4d0e-b670-298fb9f63319)\"" pod="default/hello-node-759d89bdcc-xzst7" podUID="d70d9020-134f-4d0e-b670-298fb9f63319"
	Feb 13 22:52:00 functional-023000 kubelet[7206]: I0213 22:52:00.578987    7206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q96gx\" (UniqueName: \"kubernetes.io/projected/05417798-acfc-429f-bd82-07978c196c97-kube-api-access-q96gx\") pod \"05417798-acfc-429f-bd82-07978c196c97\" (UID: \"05417798-acfc-429f-bd82-07978c196c97\") "
	Feb 13 22:52:00 functional-023000 kubelet[7206]: I0213 22:52:00.579008    7206 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/05417798-acfc-429f-bd82-07978c196c97-test-volume\") pod \"05417798-acfc-429f-bd82-07978c196c97\" (UID: \"05417798-acfc-429f-bd82-07978c196c97\") "
	Feb 13 22:52:00 functional-023000 kubelet[7206]: I0213 22:52:00.579027    7206 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05417798-acfc-429f-bd82-07978c196c97-test-volume" (OuterVolumeSpecName: "test-volume") pod "05417798-acfc-429f-bd82-07978c196c97" (UID: "05417798-acfc-429f-bd82-07978c196c97"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Feb 13 22:52:00 functional-023000 kubelet[7206]: I0213 22:52:00.581627    7206 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05417798-acfc-429f-bd82-07978c196c97-kube-api-access-q96gx" (OuterVolumeSpecName: "kube-api-access-q96gx") pod "05417798-acfc-429f-bd82-07978c196c97" (UID: "05417798-acfc-429f-bd82-07978c196c97"). InnerVolumeSpecName "kube-api-access-q96gx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 13 22:52:00 functional-023000 kubelet[7206]: I0213 22:52:00.679365    7206 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q96gx\" (UniqueName: \"kubernetes.io/projected/05417798-acfc-429f-bd82-07978c196c97-kube-api-access-q96gx\") on node \"functional-023000\" DevicePath \"\""
	Feb 13 22:52:00 functional-023000 kubelet[7206]: I0213 22:52:00.679380    7206 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/05417798-acfc-429f-bd82-07978c196c97-test-volume\") on node \"functional-023000\" DevicePath \"\""
	Feb 13 22:52:01 functional-023000 kubelet[7206]: I0213 22:52:01.424516    7206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c52d66550a57be5a5dd6526d024bf556103c31a6eb8201268f0f299681d2450a"
	
	
	==> storage-provisioner [5b37aa35c1d0] <==
	I0213 22:50:47.321235       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 22:50:47.326296       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 22:50:47.326351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 22:51:04.710235       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 22:51:04.710318       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-023000_e4e56265-a1ef-474e-9bea-dd653b9615d9!
	I0213 22:51:04.710970       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c9d13cc8-965f-4469-903d-2f3eb9b40a9b", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-023000_e4e56265-a1ef-474e-9bea-dd653b9615d9 became leader
	I0213 22:51:04.810804       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-023000_e4e56265-a1ef-474e-9bea-dd653b9615d9!
	I0213 22:51:30.342147       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0213 22:51:30.342182       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    db5dde4d-e921-4a39-a898-7067ca84f5c0 390 0 2024-02-13 22:49:30 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-02-13 22:49:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-0dcb69f4-0889-4dc7-8849-9bf645caeeb8 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  0dcb69f4-0889-4dc7-8849-9bf645caeeb8 730 0 2024-02-13 22:51:30 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-02-13 22:51:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-02-13 22:51:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0213 22:51:30.343505       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-0dcb69f4-0889-4dc7-8849-9bf645caeeb8" provisioned
	I0213 22:51:30.343564       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0213 22:51:30.343572       1 volume_store.go:212] Trying to save persistentvolume "pvc-0dcb69f4-0889-4dc7-8849-9bf645caeeb8"
	I0213 22:51:30.344134       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"0dcb69f4-0889-4dc7-8849-9bf645caeeb8", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0213 22:51:30.349401       1 volume_store.go:219] persistentvolume "pvc-0dcb69f4-0889-4dc7-8849-9bf645caeeb8" saved
	I0213 22:51:30.349659       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"0dcb69f4-0889-4dc7-8849-9bf645caeeb8", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-0dcb69f4-0889-4dc7-8849-9bf645caeeb8
	
	
	==> storage-provisioner [e50a56d76c4f] <==
	I0213 22:49:54.824492       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 22:49:56.694083       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 22:49:56.694155       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 22:50:14.087607       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 22:50:14.087757       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-023000_4e7c704d-3057-4b4c-964f-6e501b299784!
	I0213 22:50:14.088090       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c9d13cc8-965f-4469-903d-2f3eb9b40a9b", APIVersion:"v1", ResourceVersion:"521", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-023000_4e7c704d-3057-4b4c-964f-6e501b299784 became leader
	I0213 22:50:14.187883       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-023000_4e7c704d-3057-4b4c-964f-6e501b299784!
	

-- /stdout --
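Note: the kubelet journal in the logs above repeatedly fails to create the KUBE-KUBELET-CANARY chain because the guest kernel exposes no ip6tables `nat' table; on this IPv4-only single-stack cluster that is usually benign noise rather than the test failure itself. A minimal check from the host, assuming the functional-023000 profile is still running (these commands are an illustrative sketch, not part of the captured test output):

	minikube -p functional-023000 ssh -- sudo ip6tables -t nat -L -n   # errors out if the IPv6 nat table is absent
	minikube -p functional-023000 ssh -- sudo modprobe ip6table_nat    # loads the module, if the guest kernel ships it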
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-023000 -n functional-023000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-023000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-023000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-023000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-023000/192.168.105.4
	Start Time:       Tue, 13 Feb 2024 14:51:52 -0800
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://61b370ef1bcd879dbf61a0fac1ae32b68d5b3447198e73224f04b818800256fa
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 13 Feb 2024 14:51:58 -0800
	      Finished:     Tue, 13 Feb 2024 14:51:58 -0800
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q96gx (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-q96gx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  9s    default-scheduler  Successfully assigned default/busybox-mount to functional-023000
	  Normal  Pulling    9s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 5.593s (5.593s including waiting)
	  Normal  Created    4s    kubelet            Created container mount-munger
	  Normal  Started    4s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (27.39s)
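Note: the kubelet journal above shows the test's backend container (echoserver-arm) in CrashLoopBackOff for both hello-node and hello-node-connect, which is consistent with the service connect check failing. A triage sketch, reusing the pod name hello-node-759d89bdcc-xzst7 quoted from the kubelet log (commands added for illustration, not emitted by the harness):

	kubectl --context functional-023000 describe pod hello-node-759d89bdcc-xzst7
	kubectl --context functional-023000 logs hello-node-759d89bdcc-xzst7 --previous   # logs of the last crashed attempt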

TestImageBuild/serial/BuildWithBuildArg (1.1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-128000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-128000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 7c14d79aabbe
	Removing intermediate container 7c14d79aabbe
	 ---> bb6a87bb449a
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in af40f31f4302
	Removing intermediate container af40f31f4302
	 ---> 185b1a33f8b1
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 869ba7887c3b
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
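Note: the stdout above warns at every build step that the base image gcr.io/google-containers/alpine-with-bash:1.0 is linux/amd64 while the host VM is linux/arm64/v8, and the RUN step then dies with `exec format error', i.e. the amd64 /bin/sh cannot execute in the guest without user-mode emulation. A workaround sketch against the profile's Docker daemon (assumes binfmt/QEMU emulation is available for the amd64 path; otherwise an arm64 base image is the real fix):

	eval $(minikube -p image-128000 docker-env)
	docker build --platform=linux/amd64 -t aaa:latest \
	  --build-arg ENV_A=test_env_str --no-cache ./testdata/image-build/test-arg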
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-128000 -n image-128000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-128000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                                        Args                                                        |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| start          | -p functional-023000 --dry-run                                                                                     | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| mount          | -p functional-023000                                                                                               | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|                | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3430333438/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-023000                                                                                               | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|                | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3430333438/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh            | functional-023000 ssh findmnt                                                                                      | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| mount          | -p functional-023000                                                                                               | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|                | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3430333438/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| start          | -p functional-023000                                                                                               | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|                | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                                                                 | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | -p functional-023000                                                                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh            | functional-023000 ssh findmnt                                                                                      | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-023000 ssh findmnt                                                                                      | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-023000 ssh findmnt                                                                                      | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| mount          | -p functional-023000                                                                                               | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|                | --kill=true                                                                                                        |                   |         |         |                     |                     |
	| update-context | functional-023000                                                                                                  | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-023000                                                                                                  | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-023000                                                                                                  | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| image          | functional-023000                                                                                                  | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | image ls --format short                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-023000                                                                                                  | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | image ls --format yaml                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| ssh            | functional-023000 ssh pgrep                                                                                        | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|                | buildkitd                                                                                                          |                   |         |         |                     |                     |
	| image          | functional-023000 image build -t                                                                                   | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | localhost/my-image:functional-023000                                                                               |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                                                                   |                   |         |         |                     |                     |
	| image          | functional-023000 image ls                                                                                         | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	| image          | functional-023000                                                                                                  | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | image ls --format json                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-023000                                                                                                  | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | image ls --format table                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| delete         | -p functional-023000                                                                                               | functional-023000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	| start          | -p image-128000 --driver=qemu2                                                                                     | image-128000      | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                |                                                                                                                    |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                                                                                | image-128000      | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | ./testdata/image-build/test-normal                                                                                 |                   |         |         |                     |                     |
	|                | -p image-128000                                                                                                    |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                                                                                | image-128000      | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | --build-opt=build-arg=ENV_A=test_env_str                                                                           |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                                                                               |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                                                                                 |                   |         |         |                     |                     |
	|                | image-128000                                                                                                       |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 14:52:11
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 14:52:11.042432    2434 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:52:11.042536    2434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:52:11.042544    2434 out.go:304] Setting ErrFile to fd 2...
	I0213 14:52:11.042546    2434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:52:11.042688    2434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 14:52:11.043733    2434 out.go:298] Setting JSON to false
	I0213 14:52:11.059863    2434 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1153,"bootTime":1707863578,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 14:52:11.059953    2434 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:52:11.063409    2434 out.go:177] * [image-128000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 14:52:11.071407    2434 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 14:52:11.071432    2434 notify.go:220] Checking for updates...
	I0213 14:52:11.078351    2434 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:52:11.081427    2434 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 14:52:11.084429    2434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:52:11.087418    2434 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 14:52:11.090354    2434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 14:52:11.093564    2434 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:52:11.097378    2434 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 14:52:11.104335    2434 start.go:298] selected driver: qemu2
	I0213 14:52:11.104339    2434 start.go:902] validating driver "qemu2" against <nil>
	I0213 14:52:11.104344    2434 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 14:52:11.104394    2434 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 14:52:11.108363    2434 out.go:177] * Automatically selected the socket_vmnet network
	I0213 14:52:11.113760    2434 start_flags.go:392] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0213 14:52:11.113850    2434 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 14:52:11.113892    2434 cni.go:84] Creating CNI manager for ""
	I0213 14:52:11.113899    2434 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:52:11.113903    2434 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 14:52:11.113909    2434 start_flags.go:321] config:
	{Name:image-128000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:image-128000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:52:11.118472    2434 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 14:52:11.125397    2434 out.go:177] * Starting control plane node image-128000 in cluster image-128000
	I0213 14:52:11.128328    2434 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 14:52:11.128339    2434 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 14:52:11.128344    2434 cache.go:56] Caching tarball of preloaded images
	I0213 14:52:11.128408    2434 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 14:52:11.128411    2434 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 14:52:11.128639    2434 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/config.json ...
	I0213 14:52:11.128649    2434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/config.json: {Name:mk0598891b6bb46ed5d49b54cd300be08f344919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:52:11.128952    2434 start.go:365] acquiring machines lock for image-128000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 14:52:11.128982    2434 start.go:369] acquired machines lock for "image-128000" in 27.083µs
	I0213 14:52:11.128992    2434 start.go:93] Provisioning new machine with config: &{Name:image-128000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:image-128000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 14:52:11.129026    2434 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 14:52:11.135292    2434 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0213 14:52:11.159074    2434 start.go:159] libmachine.API.Create for "image-128000" (driver="qemu2")
	I0213 14:52:11.159110    2434 client.go:168] LocalClient.Create starting
	I0213 14:52:11.159184    2434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 14:52:11.159212    2434 main.go:141] libmachine: Decoding PEM data...
	I0213 14:52:11.159227    2434 main.go:141] libmachine: Parsing certificate...
	I0213 14:52:11.159264    2434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 14:52:11.159284    2434 main.go:141] libmachine: Decoding PEM data...
	I0213 14:52:11.159290    2434 main.go:141] libmachine: Parsing certificate...
	I0213 14:52:11.159629    2434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 14:52:11.278505    2434 main.go:141] libmachine: Creating SSH key...
	I0213 14:52:11.525774    2434 main.go:141] libmachine: Creating Disk image...
	I0213 14:52:11.525781    2434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 14:52:11.525991    2434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/disk.qcow2
	I0213 14:52:11.547559    2434 main.go:141] libmachine: STDOUT: 
	I0213 14:52:11.547577    2434 main.go:141] libmachine: STDERR: 
	I0213 14:52:11.547629    2434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/disk.qcow2 +20000M
	I0213 14:52:11.558913    2434 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 14:52:11.558934    2434 main.go:141] libmachine: STDERR: 
	I0213 14:52:11.558949    2434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/disk.qcow2
	I0213 14:52:11.558958    2434 main.go:141] libmachine: Starting QEMU VM...
	I0213 14:52:11.558993    2434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:13:4b:8c:b8:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/disk.qcow2
	I0213 14:52:11.605199    2434 main.go:141] libmachine: STDOUT: 
	I0213 14:52:11.605225    2434 main.go:141] libmachine: STDERR: 
	I0213 14:52:11.605228    2434 main.go:141] libmachine: Attempt 0
	I0213 14:52:11.605247    2434 main.go:141] libmachine: Searching for ae:13:4b:8c:b8:4e in /var/db/dhcpd_leases ...
	I0213 14:52:11.605300    2434 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0213 14:52:11.605317    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:52:11.605324    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:52:11.605329    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:52:11.605335    2434 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:52:13.607441    2434 main.go:141] libmachine: Attempt 1
	I0213 14:52:13.607487    2434 main.go:141] libmachine: Searching for ae:13:4b:8c:b8:4e in /var/db/dhcpd_leases ...
	I0213 14:52:13.607856    2434 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0213 14:52:13.607899    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:52:13.607930    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:52:13.607955    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:52:13.607982    2434 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:52:15.610266    2434 main.go:141] libmachine: Attempt 2
	I0213 14:52:15.610314    2434 main.go:141] libmachine: Searching for ae:13:4b:8c:b8:4e in /var/db/dhcpd_leases ...
	I0213 14:52:15.610590    2434 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0213 14:52:15.610632    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:52:15.610658    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:52:15.610683    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:52:15.610708    2434 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:52:17.612812    2434 main.go:141] libmachine: Attempt 3
	I0213 14:52:17.612824    2434 main.go:141] libmachine: Searching for ae:13:4b:8c:b8:4e in /var/db/dhcpd_leases ...
	I0213 14:52:17.612884    2434 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0213 14:52:17.612896    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:52:17.612901    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:52:17.612905    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:52:17.612918    2434 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:52:19.614901    2434 main.go:141] libmachine: Attempt 4
	I0213 14:52:19.614906    2434 main.go:141] libmachine: Searching for ae:13:4b:8c:b8:4e in /var/db/dhcpd_leases ...
	I0213 14:52:19.614994    2434 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0213 14:52:19.615009    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:52:19.615016    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:52:19.615021    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:52:19.615025    2434 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:52:21.616998    2434 main.go:141] libmachine: Attempt 5
	I0213 14:52:21.617003    2434 main.go:141] libmachine: Searching for ae:13:4b:8c:b8:4e in /var/db/dhcpd_leases ...
	I0213 14:52:21.617035    2434 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0213 14:52:21.617041    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:52:21.617046    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:52:21.617050    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:52:21.617054    2434 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:52:23.619043    2434 main.go:141] libmachine: Attempt 6
	I0213 14:52:23.619052    2434 main.go:141] libmachine: Searching for ae:13:4b:8c:b8:4e in /var/db/dhcpd_leases ...
	I0213 14:52:23.619125    2434 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0213 14:52:23.619136    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:52:23.619141    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:52:23.619145    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:52:23.619149    2434 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:52:25.621153    2434 main.go:141] libmachine: Attempt 7
	I0213 14:52:25.621162    2434 main.go:141] libmachine: Searching for ae:13:4b:8c:b8:4e in /var/db/dhcpd_leases ...
	I0213 14:52:25.621242    2434 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0213 14:52:25.621251    2434 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ae:13:4b:8c:b8:4e ID:1,ae:13:4b:8c:b8:4e Lease:0x65cd4428}
	I0213 14:52:25.621254    2434 main.go:141] libmachine: Found match: ae:13:4b:8c:b8:4e
	I0213 14:52:25.621260    2434 main.go:141] libmachine: IP: 192.168.105.5
	I0213 14:52:25.621263    2434 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
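
The retry loop above polls /var/db/dhcpd_leases every two seconds until the VM's MAC shows up; on attempt 7 a fifth lease appears with the matching hw_address, and the machine's IP is read from that entry. A toy Go version of the lookup, under the assumption that each lease block lists an ip_address= line before its hw_address=1,<mac> line (which is how macOS writes the file):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findLeaseIP scans a macOS-style dhcpd_leases file for the given MAC
    // and returns the ip_address of the matching lease entry.
    func findLeaseIP(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "ip_address=") {
    			ip = strings.TrimPrefix(line, "ip_address=")
    		}
    		if strings.HasPrefix(line, "hw_address=1,") && strings.TrimPrefix(line, "hw_address=1,") == mac {
    			return ip, nil
    		}
    	}
    	if err := sc.Err(); err != nil {
    		return "", err
    	}
    	return "", fmt.Errorf("%s not found in %s", mac, path)
    }

    func main() {
    	ip, err := findLeaseIP("/var/db/dhcpd_leases", "ae:13:4b:8c:b8:4e")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("IP:", ip)
    }
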
	I0213 14:52:26.635037    2434 machine.go:88] provisioning docker machine ...
	I0213 14:52:26.635069    2434 buildroot.go:166] provisioning hostname "image-128000"
	I0213 14:52:26.635142    2434 main.go:141] libmachine: Using SSH client type: native
	I0213 14:52:26.635847    2434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10443f8e0] 0x104442050 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0213 14:52:26.635858    2434 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-128000 && echo "image-128000" | sudo tee /etc/hostname
	I0213 14:52:26.670270    2434 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0213 14:52:29.788265    2434 main.go:141] libmachine: SSH cmd err, output: <nil>: image-128000
	
	I0213 14:52:29.788392    2434 main.go:141] libmachine: Using SSH client type: native
	I0213 14:52:29.788906    2434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10443f8e0] 0x104442050 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0213 14:52:29.788919    2434 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-128000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-128000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-128000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 14:52:29.872629    2434 main.go:141] libmachine: SSH cmd err, output: <nil>: 
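
Both provisioning commands above run over SSH as docker@192.168.105.5 with the machine's generated key; the single "handshake failed" at 14:52:26 is just the guest's sshd not being ready yet, and the client retries until it succeeds. A self-contained sketch of that pattern using golang.org/x/crypto/ssh (host, key path, and retry count are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH dials docker@host with a private key and runs one command,
    // retrying the dial a few times because sshd may still be coming up.
    func runSSH(host, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
    		Timeout:         10 * time.Second,
    	}
    	var client *ssh.Client
    	for i := 0; i < 5; i++ {
    		if client, err = ssh.Dial("tcp", host+":22", cfg); err == nil {
    			break
    		}
    		time.Sleep(2 * time.Second)
    	}
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("192.168.105.5", "id_rsa",
    		`sudo hostname image-128000 && echo "image-128000" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }
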
	I0213 14:52:29.872645    2434 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18170-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18170-979/.minikube}
	I0213 14:52:29.872657    2434 buildroot.go:174] setting up certificates
	I0213 14:52:29.872672    2434 provision.go:83] configureAuth start
	I0213 14:52:29.872678    2434 provision.go:138] copyHostCerts
	I0213 14:52:29.872820    2434 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem, removing ...
	I0213 14:52:29.872827    2434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem
	I0213 14:52:29.873048    2434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem (1078 bytes)
	I0213 14:52:29.873372    2434 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem, removing ...
	I0213 14:52:29.873375    2434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem
	I0213 14:52:29.873452    2434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem (1123 bytes)
	I0213 14:52:29.873631    2434 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem, removing ...
	I0213 14:52:29.873633    2434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem
	I0213 14:52:29.873721    2434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem (1675 bytes)
	I0213 14:52:29.873873    2434 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem org=jenkins.image-128000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-128000]
	I0213 14:52:29.977371    2434 provision.go:172] copyRemoteCerts
	I0213 14:52:29.977397    2434 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 14:52:29.977403    2434 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/id_rsa Username:docker}
	I0213 14:52:30.013984    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 14:52:30.021508    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0213 14:52:30.028429    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 14:52:30.035199    2434 provision.go:86] duration metric: configureAuth took 162.527625ms
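
configureAuth regenerates the Docker server certificate, signed by the minikube CA, with the SAN list shown at provision.go:112 (the machine IP, localhost/127.0.0.1, and the minikube/hostname DNS names). A hedged crypto/x509 sketch of issuing such a cert; the throwaway in-memory CA here stands in for ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a TLS server certificate signed by ca, carrying
    // the IP and DNS SANs, like the san=[...] list in the log above.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, ips []net.IP, dns []string) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{org}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    		DNSNames:     dns,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }

    func main() {
    	// Throwaway self-signed CA, standing in for the persisted minikube CA.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	ca, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		panic(err)
    	}
    	der, _, err := newServerCert(ca, caKey, "jenkins.image-128000",
    		[]net.IP{net.ParseIP("192.168.105.5"), net.ParseIP("127.0.0.1")},
    		[]string{"localhost", "minikube", "image-128000"})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
    }
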
	I0213 14:52:30.035205    2434 buildroot.go:189] setting minikube options for container-runtime
	I0213 14:52:30.035312    2434 config.go:182] Loaded profile config "image-128000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 14:52:30.035350    2434 main.go:141] libmachine: Using SSH client type: native
	I0213 14:52:30.035572    2434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10443f8e0] 0x104442050 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0213 14:52:30.035575    2434 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 14:52:30.106018    2434 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0213 14:52:30.106023    2434 buildroot.go:70] root file system type: tmpfs
	I0213 14:52:30.106076    2434 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 14:52:30.106129    2434 main.go:141] libmachine: Using SSH client type: native
	I0213 14:52:30.106381    2434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10443f8e0] 0x104442050 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0213 14:52:30.106415    2434 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 14:52:30.179376    2434 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 14:52:30.179431    2434 main.go:141] libmachine: Using SSH client type: native
	I0213 14:52:30.179689    2434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10443f8e0] 0x104442050 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0213 14:52:30.179697    2434 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 14:52:30.508369    2434 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0213 14:52:30.508378    2434 machine.go:91] provisioned docker machine in 3.873449834s
	I0213 14:52:30.508383    2434 client.go:171] LocalClient.Create took 19.349860292s
	I0213 14:52:30.508398    2434 start.go:167] duration metric: libmachine.API.Create for "image-128000" took 19.349920625s
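
The unit install above is deliberately idempotent: write docker.service.new, and only if it differs from the installed unit (here it doesn't exist yet, hence the "can't stat" from diff) move it into place and daemon-reload/enable/restart. The same diff-or-move pattern in Go, as a sketch (unit path, service name, and the truncated unit text are illustrative):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // installUnit writes the unit only when it differs from what is already
    // installed, then reloads systemd and enables/restarts the service. A
    // missing old unit (first boot, as in the log) counts as differing.
    func installUnit(path, service string, content []byte) error {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, content) {
    		return nil // unchanged: skip the restart entirely
    	}
    	if err := os.WriteFile(path, content, 0o644); err != nil {
    		return err
    	}
    	for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated for the sketch
    	if err := installUnit("/lib/systemd/system/docker.service", "docker", unit); err != nil {
    		fmt.Println(err)
    	}
    }
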
	I0213 14:52:30.508402    2434 start.go:300] post-start starting for "image-128000" (driver="qemu2")
	I0213 14:52:30.508408    2434 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 14:52:30.508481    2434 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 14:52:30.508488    2434 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/id_rsa Username:docker}
	I0213 14:52:30.548343    2434 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 14:52:30.549715    2434 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 14:52:30.549720    2434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/addons for local assets ...
	I0213 14:52:30.549801    2434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/files for local assets ...
	I0213 14:52:30.549914    2434 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem -> 14072.pem in /etc/ssl/certs
	I0213 14:52:30.550035    2434 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 14:52:30.553100    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem --> /etc/ssl/certs/14072.pem (1708 bytes)
	I0213 14:52:30.560052    2434 start.go:303] post-start completed in 51.647292ms
	I0213 14:52:30.560450    2434 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/config.json ...
	I0213 14:52:30.560625    2434 start.go:128] duration metric: createHost completed in 19.432188208s
	I0213 14:52:30.560646    2434 main.go:141] libmachine: Using SSH client type: native
	I0213 14:52:30.560869    2434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10443f8e0] 0x104442050 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0213 14:52:30.560874    2434 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 14:52:30.629920    2434 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707864750.896266586
	
	I0213 14:52:30.629924    2434 fix.go:206] guest clock: 1707864750.896266586
	I0213 14:52:30.629928    2434 fix.go:219] Guest: 2024-02-13 14:52:30.896266586 -0800 PST Remote: 2024-02-13 14:52:30.560626 -0800 PST m=+19.541259376 (delta=335.640586ms)
	I0213 14:52:30.629937    2434 fix.go:190] guest clock delta is within tolerance: 335.640586ms
	I0213 14:52:30.629939    2434 start.go:83] releasing machines lock for "image-128000", held for 19.501548125s
	I0213 14:52:30.630228    2434 ssh_runner.go:195] Run: cat /version.json
	I0213 14:52:30.630228    2434 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 14:52:30.630234    2434 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/id_rsa Username:docker}
	I0213 14:52:30.630247    2434 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/id_rsa Username:docker}
	I0213 14:52:30.668655    2434 ssh_runner.go:195] Run: systemctl --version
	I0213 14:52:30.712016    2434 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 14:52:30.714174    2434 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 14:52:30.714204    2434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 14:52:30.720419    2434 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 14:52:30.720425    2434 start.go:475] detecting cgroup driver to use...
	I0213 14:52:30.720499    2434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 14:52:30.726672    2434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0213 14:52:30.730246    2434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 14:52:30.733747    2434 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 14:52:30.733775    2434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 14:52:30.736820    2434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 14:52:30.739514    2434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 14:52:30.742434    2434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 14:52:30.745808    2434 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 14:52:30.749139    2434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 14:52:30.752105    2434 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 14:52:30.754758    2434 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 14:52:30.757867    2434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:52:30.820544    2434 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 14:52:30.828514    2434 start.go:475] detecting cgroup driver to use...
	I0213 14:52:30.828570    2434 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 14:52:30.835310    2434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 14:52:30.840324    2434 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 14:52:30.849026    2434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 14:52:30.853789    2434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 14:52:30.858801    2434 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0213 14:52:30.898012    2434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 14:52:30.903268    2434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 14:52:30.908448    2434 ssh_runner.go:195] Run: which cri-dockerd
	I0213 14:52:30.909832    2434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 14:52:30.912642    2434 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 14:52:30.917600    2434 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 14:52:30.979156    2434 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 14:52:31.061772    2434 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 14:52:31.061828    2434 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 14:52:31.067250    2434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:52:31.128096    2434 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 14:52:32.289716    2434 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161642625s)
	I0213 14:52:32.289769    2434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 14:52:32.294495    2434 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0213 14:52:32.300948    2434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 14:52:32.305319    2434 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 14:52:32.384774    2434 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 14:52:32.443863    2434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:52:32.521840    2434 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 14:52:32.528362    2434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 14:52:32.532601    2434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:52:32.599412    2434 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 14:52:32.622106    2434 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 14:52:32.622165    2434 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 14:52:32.624174    2434 start.go:543] Will wait 60s for crictl version
	I0213 14:52:32.624213    2434 ssh_runner.go:195] Run: which crictl
	I0213 14:52:32.625575    2434 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 14:52:32.645230    2434 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0213 14:52:32.645284    2434 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 14:52:32.654996    2434 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 14:52:32.672537    2434 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0213 14:52:32.672673    2434 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0213 14:52:32.674005    2434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
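
The /etc/hosts update above is a grep -v of any stale line for the name followed by an append of the fresh ip<TAB>name pair, written back through a temp file. A simplified Go equivalent that rewrites the file in place (the real flow goes through /tmp/h.$$ and sudo cp):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any existing line ending in "<TAB>name" and
    // appends a fresh "ip<TAB>name" pair, mirroring the logged bash idiom.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.105.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }
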
	I0213 14:52:32.678158    2434 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 14:52:32.678194    2434 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 14:52:32.683301    2434 docker.go:685] Got preloaded images: 
	I0213 14:52:32.683306    2434 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0213 14:52:32.683342    2434 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 14:52:32.686483    2434 ssh_runner.go:195] Run: which lz4
	I0213 14:52:32.688055    2434 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 14:52:32.689325    2434 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 14:52:32.689332    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (357941720 bytes)
	I0213 14:52:33.982567    2434 docker.go:649] Took 1.294573 seconds to copy over tarball
	I0213 14:52:33.982618    2434 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 14:52:35.040364    2434 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.057752708s)
	I0213 14:52:35.040378    2434 ssh_runner.go:146] rm: /preloaded.tar.lz4
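
The preload path above: stat /preloaded.tar.lz4, copy the ~358 MB tarball over when it's missing, unpack it into /var with xattrs preserved (tar shelling out to lz4), then delete it. A Go sketch of the unpack-and-clean-up half, with the copy step elided:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // restorePreload extracts the preloaded image tarball into /var and
    // removes it afterwards, using the same tar flags as the logged command.
    func restorePreload(tarball string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("preload missing (would be copied over first): %w", err)
    	}
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("tar: %v: %s", err, out)
    	}
    	return os.Remove(tarball)
    }

    func main() {
    	if err := restorePreload("/preloaded.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }
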
	I0213 14:52:35.055843    2434 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 14:52:35.058966    2434 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0213 14:52:35.064066    2434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:52:35.126878    2434 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 14:52:36.596658    2434 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.469812167s)
	I0213 14:52:36.596730    2434 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 14:52:36.602317    2434 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 14:52:36.602324    2434 cache_images.go:84] Images are preloaded, skipping loading
	I0213 14:52:36.602378    2434 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 14:52:36.610320    2434 cni.go:84] Creating CNI manager for ""
	I0213 14:52:36.610330    2434 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:52:36.610338    2434 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 14:52:36.610345    2434 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-128000 NodeName:image-128000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 14:52:36.610420    2434 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-128000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
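
The kubeadm config printed above is rendered from the options struct logged at kubeadm.go:176; minikube's real template is much larger, but a toy text/template rendering of a few of the same fields could look like this (struct and template are trimmed for illustration, not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    type kubeadmOpts struct {
    	AdvertiseAddress  string
    	NodeName          string
    	KubernetesVersion string
    	PodSubnet         string
    	ServiceCIDR       string
    }

    // A few lines of the rendered config above, as a template.
    const tmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
    	"kind: InitConfiguration\n" +
    	"localAPIEndpoint:\n" +
    	"  advertiseAddress: {{.AdvertiseAddress}}\n" +
    	"  bindPort: 8443\n" +
    	"nodeRegistration:\n" +
    	"  name: \"{{.NodeName}}\"\n" +
    	"---\n" +
    	"apiVersion: kubeadm.k8s.io/v1beta3\n" +
    	"kind: ClusterConfiguration\n" +
    	"kubernetesVersion: {{.KubernetesVersion}}\n" +
    	"networking:\n" +
    	"  podSubnet: \"{{.PodSubnet}}\"\n" +
    	"  serviceSubnet: {{.ServiceCIDR}}\n"

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	_ = t.Execute(os.Stdout, kubeadmOpts{
    		AdvertiseAddress:  "192.168.105.5",
    		NodeName:          "image-128000",
    		KubernetesVersion: "v1.28.4",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceCIDR:       "10.96.0.0/12",
    	})
    }
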
	
	I0213 14:52:36.610457    2434 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-128000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:image-128000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 14:52:36.610506    2434 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 14:52:36.613808    2434 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 14:52:36.613836    2434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 14:52:36.616784    2434 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0213 14:52:36.621885    2434 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 14:52:36.626940    2434 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0213 14:52:36.632064    2434 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0213 14:52:36.633270    2434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 14:52:36.637208    2434 certs.go:56] Setting up /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000 for IP: 192.168.105.5
	I0213 14:52:36.637216    2434 certs.go:190] acquiring lock for shared ca certs: {Name:mk65e421691b8fb2c09fb65e08f20f9a769da9f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:52:36.637383    2434 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key
	I0213 14:52:36.637429    2434 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key
	I0213 14:52:36.637459    2434 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/client.key
	I0213 14:52:36.637465    2434 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/client.crt with IP's: []
	I0213 14:52:36.751258    2434 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/client.crt ...
	I0213 14:52:36.751261    2434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/client.crt: {Name:mkffd43433a119d6bf51b7722daa939a52a88d4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:52:36.751512    2434 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/client.key ...
	I0213 14:52:36.751514    2434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/client.key: {Name:mk85b556f831a5b20d6632ed81f8741d322f483c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:52:36.751647    2434 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/apiserver.key.e69b33ca
	I0213 14:52:36.751653    2434 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 14:52:36.918880    2434 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/apiserver.crt.e69b33ca ...
	I0213 14:52:36.918883    2434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/apiserver.crt.e69b33ca: {Name:mk5b523a0e652cbd929f8eb3d845d60280cfaf0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:52:36.919055    2434 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/apiserver.key.e69b33ca ...
	I0213 14:52:36.919057    2434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/apiserver.key.e69b33ca: {Name:mk8cd5f594f1411364dec0df7fd3a1d5ecf2af5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:52:36.919184    2434 certs.go:337] copying /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/apiserver.crt
	I0213 14:52:36.919419    2434 certs.go:341] copying /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/apiserver.key
	I0213 14:52:36.919530    2434 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/proxy-client.key
	I0213 14:52:36.919535    2434 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/proxy-client.crt with IP's: []
	I0213 14:52:36.975240    2434 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/proxy-client.crt ...
	I0213 14:52:36.975242    2434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/proxy-client.crt: {Name:mk0fd661d503e1d5aaa7cf19b03585e8017d78df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:52:36.975380    2434 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/proxy-client.key ...
	I0213 14:52:36.975381    2434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/proxy-client.key: {Name:mkbdc3fda0b659df9c37903c5f6dabf376a5605e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:52:36.975637    2434 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem (1338 bytes)
	W0213 14:52:36.975668    2434 certs.go:433] ignoring /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407_empty.pem, impossibly tiny 0 bytes
	I0213 14:52:36.975674    2434 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 14:52:36.975693    2434 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem (1078 bytes)
	I0213 14:52:36.975713    2434 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem (1123 bytes)
	I0213 14:52:36.975730    2434 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem (1675 bytes)
	I0213 14:52:36.975778    2434 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem (1708 bytes)
	I0213 14:52:36.976171    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 14:52:36.983730    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 14:52:36.990555    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 14:52:36.997690    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/image-128000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 14:52:37.004682    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 14:52:37.011140    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 14:52:37.018005    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 14:52:37.024965    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0213 14:52:37.031674    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem --> /usr/share/ca-certificates/1407.pem (1338 bytes)
	I0213 14:52:37.038409    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem --> /usr/share/ca-certificates/14072.pem (1708 bytes)
	I0213 14:52:37.045578    2434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 14:52:37.052223    2434 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 14:52:37.057152    2434 ssh_runner.go:195] Run: openssl version
	I0213 14:52:37.059114    2434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1407.pem && ln -fs /usr/share/ca-certificates/1407.pem /etc/ssl/certs/1407.pem"
	I0213 14:52:37.062354    2434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1407.pem
	I0213 14:52:37.064133    2434 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:48 /usr/share/ca-certificates/1407.pem
	I0213 14:52:37.064161    2434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1407.pem
	I0213 14:52:37.065995    2434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1407.pem /etc/ssl/certs/51391683.0"
	I0213 14:52:37.069529    2434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14072.pem && ln -fs /usr/share/ca-certificates/14072.pem /etc/ssl/certs/14072.pem"
	I0213 14:52:37.072678    2434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14072.pem
	I0213 14:52:37.074089    2434 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:48 /usr/share/ca-certificates/14072.pem
	I0213 14:52:37.074105    2434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14072.pem
	I0213 14:52:37.076157    2434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14072.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 14:52:37.079044    2434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 14:52:37.082382    2434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:52:37.083934    2434 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:40 /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:52:37.083950    2434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:52:37.085714    2434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
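
The three symlink steps above follow OpenSSL's lookup convention: CAs in /etc/ssl/certs are found by a symlink named <subject-hash>.0, so after dropping each PEM into /usr/share/ca-certificates the hash is computed with openssl x509 -hash and the matching link is created (e.g. b5213941.0 for minikubeCA.pem). A small Go sketch of the same idiom, shelling out to openssl for the hash:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCACert asks openssl for the certificate's subject hash and creates
    // the /etc/ssl/certs/<hash>.0 symlink that OpenSSL's CA lookup expects.
    func linkCACert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	_ = os.Remove(link) // mimic ln -fs: replace any stale link
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }
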
	I0213 14:52:37.088985    2434 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 14:52:37.090292    2434 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 14:52:37.090319    2434 kubeadm.go:404] StartCluster: {Name:image-128000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:image-128000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:52:37.090370    2434 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 14:52:37.095986    2434 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 14:52:37.098865    2434 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 14:52:37.102038    2434 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 14:52:37.105203    2434 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 14:52:37.105213    2434 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 14:52:37.126605    2434 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 14:52:37.126628    2434 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 14:52:37.181202    2434 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 14:52:37.181264    2434 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 14:52:37.181318    2434 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 14:52:37.275938    2434 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 14:52:37.285129    2434 out.go:204]   - Generating certificates and keys ...
	I0213 14:52:37.285159    2434 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 14:52:37.285188    2434 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 14:52:37.398210    2434 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 14:52:37.482999    2434 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 14:52:37.687474    2434 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 14:52:37.758307    2434 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 14:52:37.806640    2434 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 14:52:37.806708    2434 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-128000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0213 14:52:38.016879    2434 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 14:52:38.016948    2434 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-128000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0213 14:52:38.145448    2434 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 14:52:38.245858    2434 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 14:52:38.283222    2434 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 14:52:38.283249    2434 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 14:52:38.379609    2434 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 14:52:38.419666    2434 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 14:52:38.703462    2434 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 14:52:38.850719    2434 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 14:52:38.850920    2434 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 14:52:38.851974    2434 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 14:52:38.856253    2434 out.go:204]   - Booting up control plane ...
	I0213 14:52:38.856307    2434 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 14:52:38.856347    2434 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 14:52:38.856389    2434 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 14:52:38.859129    2434 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 14:52:38.859643    2434 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 14:52:38.859664    2434 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 14:52:38.924934    2434 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 14:52:42.425729    2434 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.500925 seconds
	I0213 14:52:42.425778    2434 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 14:52:42.430053    2434 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 14:52:42.941324    2434 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 14:52:42.941462    2434 kubeadm.go:322] [mark-control-plane] Marking the node image-128000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 14:52:43.445913    2434 kubeadm.go:322] [bootstrap-token] Using token: noggoe.hr8rknp3dl4slkdt
	I0213 14:52:43.451483    2434 out.go:204]   - Configuring RBAC rules ...
	I0213 14:52:43.451533    2434 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 14:52:43.451573    2434 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 14:52:43.453215    2434 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 14:52:43.454351    2434 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 14:52:43.455386    2434 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 14:52:43.456411    2434 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 14:52:43.460585    2434 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 14:52:43.647334    2434 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 14:52:43.852011    2434 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 14:52:43.852385    2434 kubeadm.go:322] 
	I0213 14:52:43.852420    2434 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 14:52:43.852422    2434 kubeadm.go:322] 
	I0213 14:52:43.852457    2434 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 14:52:43.852459    2434 kubeadm.go:322] 
	I0213 14:52:43.852470    2434 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 14:52:43.852495    2434 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 14:52:43.852525    2434 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 14:52:43.852526    2434 kubeadm.go:322] 
	I0213 14:52:43.852556    2434 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 14:52:43.852559    2434 kubeadm.go:322] 
	I0213 14:52:43.852584    2434 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 14:52:43.852585    2434 kubeadm.go:322] 
	I0213 14:52:43.852623    2434 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 14:52:43.852663    2434 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 14:52:43.852695    2434 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 14:52:43.852697    2434 kubeadm.go:322] 
	I0213 14:52:43.852739    2434 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 14:52:43.852788    2434 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 14:52:43.852792    2434 kubeadm.go:322] 
	I0213 14:52:43.852838    2434 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token noggoe.hr8rknp3dl4slkdt \
	I0213 14:52:43.852885    2434 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d \
	I0213 14:52:43.852893    2434 kubeadm.go:322] 	--control-plane 
	I0213 14:52:43.852895    2434 kubeadm.go:322] 
	I0213 14:52:43.852929    2434 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 14:52:43.852930    2434 kubeadm.go:322] 
	I0213 14:52:43.852962    2434 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token noggoe.hr8rknp3dl4slkdt \
	I0213 14:52:43.853004    2434 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d 
	I0213 14:52:43.853066    2434 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 14:52:43.853073    2434 cni.go:84] Creating CNI manager for ""
	I0213 14:52:43.853080    2434 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:52:43.861520    2434 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 14:52:43.864527    2434 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 14:52:43.867648    2434 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
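For reference, the 457-byte conflist written above is not reproduced in the log. A representative bridge CNI config of the kind minikube generates at this step (a sketch only; the field values are assumptions, not the actual file contents) looks like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}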
	I0213 14:52:43.872171    2434 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 14:52:43.872217    2434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:52:43.872214    2434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fb52fe04bc8b044b129ef2ff27607d20a9fceb93 minikube.k8s.io/name=image-128000 minikube.k8s.io/updated_at=2024_02_13T14_52_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:52:43.875256    2434 ops.go:34] apiserver oom_adj: -16
	I0213 14:52:43.940135    2434 kubeadm.go:1088] duration metric: took 67.946625ms to wait for elevateKubeSystemPrivileges.
	I0213 14:52:43.940143    2434 kubeadm.go:406] StartCluster complete in 6.850035s
	I0213 14:52:43.940152    2434 settings.go:142] acquiring lock: {Name:mkdd6397441cfaf6d06a74b65d6ddefdb863237c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:52:43.940222    2434 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:52:43.940634    2434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/kubeconfig: {Name:mkf66d96abab1e512e6f2721c341e70e5b11c9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:52:43.940835    2434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 14:52:43.940890    2434 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 14:52:43.940925    2434 addons.go:69] Setting storage-provisioner=true in profile "image-128000"
	I0213 14:52:43.940931    2434 addons.go:234] Setting addon storage-provisioner=true in "image-128000"
	I0213 14:52:43.940941    2434 config.go:182] Loaded profile config "image-128000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 14:52:43.940951    2434 host.go:66] Checking if "image-128000" exists ...
	I0213 14:52:43.940955    2434 addons.go:69] Setting default-storageclass=true in profile "image-128000"
	I0213 14:52:43.940960    2434 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-128000"
	I0213 14:52:43.941210    2434 retry.go:31] will retry after 642.716258ms: connect: dial unix /Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/monitor: connect: connection refused
	I0213 14:52:43.942091    2434 addons.go:234] Setting addon default-storageclass=true in "image-128000"
	I0213 14:52:43.942098    2434 host.go:66] Checking if "image-128000" exists ...
	I0213 14:52:43.942794    2434 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 14:52:43.942797    2434 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 14:52:43.942801    2434 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/id_rsa Username:docker}
	I0213 14:52:43.979207    2434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 14:52:43.987252    2434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 14:52:44.322878    2434 start.go:929] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
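The sed pipeline run at 14:52:43.979207 above rewrites the coredns ConfigMap in place; reconstructed from that sed expression, the replaced Corefile gains a log directive before errors and the following hosts block ahead of the forward plugin:

	        hosts {
	           192.168.105.1 host.minikube.internal
	           fallthrough
	        }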
	I0213 14:52:44.444743    2434 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-128000" context rescaled to 1 replicas
	I0213 14:52:44.444757    2434 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 14:52:44.449426    2434 out.go:177] * Verifying Kubernetes components...
	I0213 14:52:44.457378    2434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 14:52:44.462935    2434 api_server.go:52] waiting for apiserver process to appear ...
	I0213 14:52:44.462959    2434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 14:52:44.467217    2434 api_server.go:72] duration metric: took 22.448792ms to wait for apiserver process to appear ...
	I0213 14:52:44.467223    2434 api_server.go:88] waiting for apiserver healthz status ...
	I0213 14:52:44.467229    2434 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0213 14:52:44.470800    2434 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0213 14:52:44.471593    2434 api_server.go:141] control plane version: v1.28.4
	I0213 14:52:44.471598    2434 api_server.go:131] duration metric: took 4.373541ms to wait for apiserver health ...
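The healthz probe above can be reproduced by hand against the same endpoint; a sketch (the -k flag is an assumption, needed only when the cluster CA is not in the local trust store):

	$ curl -k https://192.168.105.5:8443/healthz
	ok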
	I0213 14:52:44.471600    2434 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 14:52:44.477396    2434 system_pods.go:59] 4 kube-system pods found
	I0213 14:52:44.477407    2434 system_pods.go:61] "etcd-image-128000" [a074ba23-8d66-4df5-8922-6831c256aeb4] Pending
	I0213 14:52:44.477409    2434 system_pods.go:61] "kube-apiserver-image-128000" [0b8e77f1-42bf-482c-aa54-e8f6e8b76326] Pending
	I0213 14:52:44.477411    2434 system_pods.go:61] "kube-controller-manager-image-128000" [dd30a294-f817-49b3-bcef-e74e65f1c5f1] Pending
	I0213 14:52:44.477412    2434 system_pods.go:61] "kube-scheduler-image-128000" [b4029b7e-986a-4cfa-918e-23d9a9a2860f] Pending
	I0213 14:52:44.477414    2434 system_pods.go:74] duration metric: took 5.812542ms to wait for pod list to return data ...
	I0213 14:52:44.477418    2434 kubeadm.go:581] duration metric: took 32.653708ms to wait for : map[apiserver:true system_pods:true] ...
	I0213 14:52:44.477423    2434 node_conditions.go:102] verifying NodePressure condition ...
	I0213 14:52:44.478904    2434 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0213 14:52:44.478911    2434 node_conditions.go:123] node cpu capacity is 2
	I0213 14:52:44.478916    2434 node_conditions.go:105] duration metric: took 1.490792ms to run NodePressure ...
	I0213 14:52:44.478920    2434 start.go:228] waiting for startup goroutines ...
	I0213 14:52:44.590028    2434 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 14:52:44.594059    2434 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 14:52:44.594063    2434 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 14:52:44.594069    2434 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/image-128000/id_rsa Username:docker}
	I0213 14:52:44.632918    2434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 14:52:44.843602    2434 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0213 14:52:44.850504    2434 addons.go:505] enable addons completed in 909.642166ms: enabled=[default-storageclass storage-provisioner]
	I0213 14:52:44.850515    2434 start.go:233] waiting for cluster config update ...
	I0213 14:52:44.850519    2434 start.go:242] writing updated cluster config ...
	I0213 14:52:44.850747    2434 ssh_runner.go:195] Run: rm -f paused
	I0213 14:52:44.880325    2434 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 14:52:44.883424    2434 out.go:177] * Done! kubectl is now configured to use "image-128000" cluster and "default" namespace by default
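The minor-skew note above compares the local kubectl client against the cluster's control plane; it can be confirmed directly (output shape approximated, not captured from this run):

	$ kubectl version --context image-128000
	Client Version: v1.29.1
	Server Version: v1.28.4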
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-02-13 22:52:24 UTC, ends at Tue 2024-02-13 22:52:51 UTC. --
	Feb 13 22:52:40 image-128000 cri-dockerd[1027]: time="2024-02-13T22:52:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/95eeead0092cc8f0637cc2fd2925a2b78166eabc8ee3b706e2d453cbadde8181/resolv.conf as [nameserver 192.168.105.1]"
	Feb 13 22:52:40 image-128000 dockerd[1139]: time="2024-02-13T22:52:40.265562466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 13 22:52:40 image-128000 dockerd[1139]: time="2024-02-13T22:52:40.265637424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:52:40 image-128000 dockerd[1139]: time="2024-02-13T22:52:40.265647966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 13 22:52:40 image-128000 dockerd[1139]: time="2024-02-13T22:52:40.265654799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:52:40 image-128000 cri-dockerd[1027]: time="2024-02-13T22:52:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e3168b7d39b978d0d4083760bd37cc4a855c07098c3c0b69ede5304cae7c302b/resolv.conf as [nameserver 192.168.105.1]"
	Feb 13 22:52:40 image-128000 dockerd[1139]: time="2024-02-13T22:52:40.310878258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 13 22:52:40 image-128000 dockerd[1139]: time="2024-02-13T22:52:40.310932549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:52:40 image-128000 dockerd[1139]: time="2024-02-13T22:52:40.310940924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 13 22:52:40 image-128000 dockerd[1139]: time="2024-02-13T22:52:40.310951841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:52:40 image-128000 dockerd[1139]: time="2024-02-13T22:52:40.355644508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 13 22:52:40 image-128000 dockerd[1139]: time="2024-02-13T22:52:40.355970633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:52:40 image-128000 dockerd[1139]: time="2024-02-13T22:52:40.356007049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 13 22:52:40 image-128000 dockerd[1139]: time="2024-02-13T22:52:40.356068716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:52:50 image-128000 dockerd[1133]: time="2024-02-13T22:52:50.332576471Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Feb 13 22:52:50 image-128000 dockerd[1133]: time="2024-02-13T22:52:50.460355429Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Feb 13 22:52:50 image-128000 dockerd[1133]: time="2024-02-13T22:52:50.483769012Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Feb 13 22:52:50 image-128000 dockerd[1139]: time="2024-02-13T22:52:50.526470721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 13 22:52:50 image-128000 dockerd[1139]: time="2024-02-13T22:52:50.526790804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:52:50 image-128000 dockerd[1139]: time="2024-02-13T22:52:50.526804137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 13 22:52:50 image-128000 dockerd[1139]: time="2024-02-13T22:52:50.526808887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:52:50 image-128000 dockerd[1133]: time="2024-02-13T22:52:50.649207054Z" level=info msg="ignoring event" container=869ba7887c3b8bf810f8189b6b4b94aa5e56250d16acc810b7ba56f4290d5ff9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 22:52:50 image-128000 dockerd[1139]: time="2024-02-13T22:52:50.649482304Z" level=info msg="shim disconnected" id=869ba7887c3b8bf810f8189b6b4b94aa5e56250d16acc810b7ba56f4290d5ff9 namespace=moby
	Feb 13 22:52:50 image-128000 dockerd[1139]: time="2024-02-13T22:52:50.649511971Z" level=warning msg="cleaning up after shim disconnected" id=869ba7887c3b8bf810f8189b6b4b94aa5e56250d16acc810b7ba56f4290d5ff9 namespace=moby
	Feb 13 22:52:50 image-128000 dockerd[1139]: time="2024-02-13T22:52:50.649516304Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	30ec7f554dad0       05c284c929889       11 seconds ago      Running             kube-scheduler            0                   e3168b7d39b97       kube-scheduler-image-128000
	0d0b390bf385c       9cdd6470f48c8       11 seconds ago      Running             etcd                      0                   95eeead0092cc       etcd-image-128000
	e2b00fb3fc35c       04b4c447bb9d4       11 seconds ago      Running             kube-apiserver            0                   9118e20258b82       kube-apiserver-image-128000
	2f1b05989ef18       9961cbceaf234       11 seconds ago      Running             kube-controller-manager   0                   0b32c2dced234       kube-controller-manager-image-128000
	
	
	==> describe nodes <==
	Name:               image-128000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-128000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb52fe04bc8b044b129ef2ff27607d20a9fceb93
	                    minikube.k8s.io/name=image-128000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T14_52_43_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 22:52:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-128000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 22:52:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 22:52:47 +0000   Tue, 13 Feb 2024 22:52:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 22:52:47 +0000   Tue, 13 Feb 2024 22:52:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 22:52:47 +0000   Tue, 13 Feb 2024 22:52:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 22:52:47 +0000   Tue, 13 Feb 2024 22:52:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-128000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904700Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904700Ki
	  pods:               110
	System Info:
	  Machine ID:                 34134ee2aa3a40bbbb21932e9f0eb452
	  System UUID:                34134ee2aa3a40bbbb21932e9f0eb452
	  Boot ID:                    22fc1a92-a669-401f-abfe-1395a001e52f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-128000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7s
	  kube-system                 kube-apiserver-image-128000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-image-128000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-scheduler-image-128000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 12s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet  Node image-128000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet  Node image-128000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x7 over 12s)  kubelet  Node image-128000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 8s                 kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7s                 kubelet  Node image-128000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet  Node image-128000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet  Node image-128000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4s                 kubelet  Node image-128000 status is now: NodeReady
	
	
	==> dmesg <==
	[Feb13 22:52] efi: memattr: Unexpected EFI Memory Attributes table version 2
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.642358] EINJ: EINJ table not found.
	[  +0.531351] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +6.541068] systemd-fstab-generator[488]: Ignoring "noauto" for root device
	[  +0.062916] systemd-fstab-generator[499]: Ignoring "noauto" for root device
	[  +0.447713] systemd-fstab-generator[764]: Ignoring "noauto" for root device
	[  +0.157194] systemd-fstab-generator[801]: Ignoring "noauto" for root device
	[  +0.082063] systemd-fstab-generator[812]: Ignoring "noauto" for root device
	[  +0.067411] systemd-fstab-generator[825]: Ignoring "noauto" for root device
	[  +1.257568] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +0.058100] systemd-fstab-generator[995]: Ignoring "noauto" for root device
	[  +0.076048] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +0.079215] systemd-fstab-generator[1020]: Ignoring "noauto" for root device
	[  +2.527436] systemd-fstab-generator[1126]: Ignoring "noauto" for root device
	[  +1.453501] kauditd_printk_skb: 185 callbacks suppressed
	[  +2.339213] systemd-fstab-generator[1511]: Ignoring "noauto" for root device
	[  +4.585076] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.037221] systemd-fstab-generator[2361]: Ignoring "noauto" for root device
	[  +6.727139] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [0d0b390bf385] <==
	{"level":"info","ts":"2024-02-13T22:52:40.53169Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-13T22:52:40.531715Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-13T22:52:40.53172Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-13T22:52:40.531958Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-02-13T22:52:40.531962Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2024-02-13T22:52:40.532465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2024-02-13T22:52:40.542328Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2024-02-13T22:52:40.986259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-13T22:52:40.986323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-13T22:52:40.986344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2024-02-13T22:52:40.986357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2024-02-13T22:52:40.986364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2024-02-13T22:52:40.986375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2024-02-13T22:52:40.986383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2024-02-13T22:52:40.987683Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-128000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T22:52:40.987744Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T22:52:40.988215Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2024-02-13T22:52:40.988327Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T22:52:40.988388Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T22:52:40.98852Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T22:52:40.988331Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T22:52:40.988875Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T22:52:40.988941Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T22:52:40.988978Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T22:52:40.989068Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:52:51 up 0 min,  0 users,  load average: 0.29, 0.06, 0.02
	Linux image-128000 5.10.57 #1 SMP PREEMPT Thu Dec 28 19:03:47 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [e2b00fb3fc35] <==
	I0213 22:52:41.634991       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0213 22:52:41.635134       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0213 22:52:41.635228       1 aggregator.go:166] initial CRD sync complete...
	I0213 22:52:41.635249       1 autoregister_controller.go:141] Starting autoregister controller
	I0213 22:52:41.635262       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0213 22:52:41.635276       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0213 22:52:41.635284       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0213 22:52:41.635303       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0213 22:52:41.635277       1 cache.go:39] Caches are synced for autoregister controller
	I0213 22:52:41.635835       1 controller.go:624] quota admission added evaluator for: namespaces
	I0213 22:52:41.658940       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0213 22:52:41.832467       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0213 22:52:42.537287       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0213 22:52:42.538696       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0213 22:52:42.538701       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0213 22:52:42.667296       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0213 22:52:42.679185       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0213 22:52:42.744015       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0213 22:52:42.746047       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0213 22:52:42.746470       1 controller.go:624] quota admission added evaluator for: endpoints
	I0213 22:52:42.747666       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0213 22:52:43.576498       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0213 22:52:43.909627       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0213 22:52:43.913624       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0213 22:52:43.919651       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [2f1b05989ef1] <==
	I0213 22:52:44.525910       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0213 22:52:44.675458       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0213 22:52:44.675508       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0213 22:52:44.675513       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0213 22:52:44.824987       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0213 22:52:44.825020       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0213 22:52:44.825028       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0213 22:52:44.825032       1 shared_informer.go:318] Caches are synced for token_cleaner
	E0213 22:52:44.975356       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0213 22:52:44.975368       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0213 22:52:45.124855       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0213 22:52:45.124901       1 gc_controller.go:101] "Starting GC controller"
	I0213 22:52:45.124910       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0213 22:52:45.276219       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0213 22:52:45.276253       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0213 22:52:45.276260       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0213 22:52:45.425703       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0213 22:52:45.425771       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0213 22:52:45.425778       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0213 22:52:45.575503       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0213 22:52:45.575580       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0213 22:52:45.575591       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0213 22:52:45.726353       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0213 22:52:45.726536       1 stateful_set.go:161] "Starting stateful set controller"
	I0213 22:52:45.726549       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	
	
	==> kube-scheduler [30ec7f554dad] <==
	W0213 22:52:41.621162       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 22:52:41.621170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 22:52:41.621215       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 22:52:41.621229       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0213 22:52:41.621242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 22:52:41.621255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0213 22:52:41.621276       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 22:52:41.621284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0213 22:52:41.621323       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 22:52:41.621331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0213 22:52:41.621364       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 22:52:41.621371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0213 22:52:41.621434       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0213 22:52:41.621442       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0213 22:52:41.621488       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0213 22:52:41.621525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0213 22:52:41.621569       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 22:52:41.621576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0213 22:52:41.621655       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 22:52:41.621670       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 22:52:42.477172       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 22:52:42.477193       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 22:52:42.581481       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 22:52:42.581500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0213 22:52:44.817664       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 22:52:24 UTC, ends at Tue 2024-02-13 22:52:51 UTC. --
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.053245    2380 kubelet_node_status.go:108] "Node was previously registered" node="image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.053341    2380 kubelet_node_status.go:73] "Successfully registered node" node="image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.068357    2380 topology_manager.go:215] "Topology Admit Handler" podUID="cc810261121aa3385c36dc1a21bbae2a" podNamespace="kube-system" podName="etcd-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.068413    2380 topology_manager.go:215] "Topology Admit Handler" podUID="3a6124153ba38df2b246e68e7601a30c" podNamespace="kube-system" podName="kube-apiserver-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.068429    2380 topology_manager.go:215] "Topology Admit Handler" podUID="ebe0e29dec362e595f36a0d394bdc5f0" podNamespace="kube-system" podName="kube-controller-manager-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.068443    2380 topology_manager.go:215] "Topology Admit Handler" podUID="25b53d2191cb1980b43b0caf643d08f9" podNamespace="kube-system" podName="kube-scheduler-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: E0213 22:52:44.074375    2380 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-128000\" already exists" pod="kube-system/kube-apiserver-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.146771    2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a6124153ba38df2b246e68e7601a30c-ca-certs\") pod \"kube-apiserver-image-128000\" (UID: \"3a6124153ba38df2b246e68e7601a30c\") " pod="kube-system/kube-apiserver-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.146807    2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebe0e29dec362e595f36a0d394bdc5f0-usr-share-ca-certificates\") pod \"kube-controller-manager-image-128000\" (UID: \"ebe0e29dec362e595f36a0d394bdc5f0\") " pod="kube-system/kube-controller-manager-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.146818    2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebe0e29dec362e595f36a0d394bdc5f0-flexvolume-dir\") pod \"kube-controller-manager-image-128000\" (UID: \"ebe0e29dec362e595f36a0d394bdc5f0\") " pod="kube-system/kube-controller-manager-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.146828    2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebe0e29dec362e595f36a0d394bdc5f0-k8s-certs\") pod \"kube-controller-manager-image-128000\" (UID: \"ebe0e29dec362e595f36a0d394bdc5f0\") " pod="kube-system/kube-controller-manager-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.146839    2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebe0e29dec362e595f36a0d394bdc5f0-kubeconfig\") pod \"kube-controller-manager-image-128000\" (UID: \"ebe0e29dec362e595f36a0d394bdc5f0\") " pod="kube-system/kube-controller-manager-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.146848    2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/cc810261121aa3385c36dc1a21bbae2a-etcd-certs\") pod \"etcd-image-128000\" (UID: \"cc810261121aa3385c36dc1a21bbae2a\") " pod="kube-system/etcd-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.146858    2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/cc810261121aa3385c36dc1a21bbae2a-etcd-data\") pod \"etcd-image-128000\" (UID: \"cc810261121aa3385c36dc1a21bbae2a\") " pod="kube-system/etcd-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.146883    2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a6124153ba38df2b246e68e7601a30c-k8s-certs\") pod \"kube-apiserver-image-128000\" (UID: \"3a6124153ba38df2b246e68e7601a30c\") " pod="kube-system/kube-apiserver-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.146896    2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a6124153ba38df2b246e68e7601a30c-usr-share-ca-certificates\") pod \"kube-apiserver-image-128000\" (UID: \"3a6124153ba38df2b246e68e7601a30c\") " pod="kube-system/kube-apiserver-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.146905    2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebe0e29dec362e595f36a0d394bdc5f0-ca-certs\") pod \"kube-controller-manager-image-128000\" (UID: \"ebe0e29dec362e595f36a0d394bdc5f0\") " pod="kube-system/kube-controller-manager-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.146913    2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25b53d2191cb1980b43b0caf643d08f9-kubeconfig\") pod \"kube-scheduler-image-128000\" (UID: \"25b53d2191cb1980b43b0caf643d08f9\") " pod="kube-system/kube-scheduler-image-128000"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.932867    2380 apiserver.go:52] "Watching apiserver"
	Feb 13 22:52:44 image-128000 kubelet[2380]: I0213 22:52:44.946654    2380 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 13 22:52:45 image-128000 kubelet[2380]: I0213 22:52:45.013513    2380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-128000" podStartSLOduration=1.013481551 podCreationTimestamp="2024-02-13 22:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 22:52:45.009404551 +0000 UTC m=+1.112950835" watchObservedRunningTime="2024-02-13 22:52:45.013481551 +0000 UTC m=+1.117027835"
	Feb 13 22:52:45 image-128000 kubelet[2380]: I0213 22:52:45.013569    2380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-128000" podStartSLOduration=1.013560343 podCreationTimestamp="2024-02-13 22:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 22:52:45.01337251 +0000 UTC m=+1.116918793" watchObservedRunningTime="2024-02-13 22:52:45.013560343 +0000 UTC m=+1.117106627"
	Feb 13 22:52:45 image-128000 kubelet[2380]: I0213 22:52:45.020857    2380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-128000" podStartSLOduration=3.020837676 podCreationTimestamp="2024-02-13 22:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 22:52:45.017418926 +0000 UTC m=+1.120965210" watchObservedRunningTime="2024-02-13 22:52:45.020837676 +0000 UTC m=+1.124383960"
	Feb 13 22:52:45 image-128000 kubelet[2380]: I0213 22:52:45.026436    2380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-128000" podStartSLOduration=1.026386551 podCreationTimestamp="2024-02-13 22:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 22:52:45.020997843 +0000 UTC m=+1.124544085" watchObservedRunningTime="2024-02-13 22:52:45.026386551 +0000 UTC m=+1.129932835"
	Feb 13 22:52:47 image-128000 kubelet[2380]: I0213 22:52:47.911213    2380 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-128000 -n image-128000
helpers_test.go:261: (dbg) Run:  kubectl --context image-128000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-128000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-128000 describe pod storage-provisioner: exit status 1 (38.54175ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context image-128000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.10s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (56.98s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-632000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-632000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.872520292s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-632000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-632000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [16401e57-bd05-4f30-b23b-eb30cd6a0f17] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [16401e57-bd05-4f30-b23b-eb30cd6a0f17] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.007984125s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-632000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-632000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-632000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.029771583s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-632000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-632000 addons disable ingress-dns --alsologtostderr -v=1: (11.631687334s)
addons_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-632000 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-632000 addons disable ingress --alsologtostderr -v=1: (7.118530333s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-632000 -n ingress-addon-legacy-632000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-632000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-023000                     | functional-023000           | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|                | --kill=true                              |                             |         |         |                     |                     |
	| update-context | functional-023000                        | functional-023000           | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-023000                        | functional-023000           | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-023000                        | functional-023000           | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-023000                        | functional-023000           | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-023000                        | functional-023000           | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-023000 ssh pgrep              | functional-023000           | jenkins | v1.32.0 | 13 Feb 24 14:52 PST |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-023000 image build -t         | functional-023000           | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | localhost/my-image:functional-023000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-023000 image ls               | functional-023000           | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	| image          | functional-023000                        | functional-023000           | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-023000                        | functional-023000           | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-023000                     | functional-023000           | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	| start          | -p image-128000 --driver=qemu2           | image-128000                | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-128000                | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-128000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-128000                | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-128000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-128000                | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-128000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-128000                | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-128000                          |                             |         |         |                     |                     |
	| delete         | -p image-128000                          | image-128000                | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:52 PST |
	| start          | -p ingress-addon-legacy-632000           | ingress-addon-legacy-632000 | jenkins | v1.32.0 | 13 Feb 24 14:52 PST | 13 Feb 24 14:54 PST |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-632000              | ingress-addon-legacy-632000 | jenkins | v1.32.0 | 13 Feb 24 14:54 PST | 13 Feb 24 14:55 PST |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-632000              | ingress-addon-legacy-632000 | jenkins | v1.32.0 | 13 Feb 24 14:55 PST | 13 Feb 24 14:55 PST |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-632000              | ingress-addon-legacy-632000 | jenkins | v1.32.0 | 13 Feb 24 14:55 PST | 13 Feb 24 14:55 PST |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-632000 ip           | ingress-addon-legacy-632000 | jenkins | v1.32.0 | 13 Feb 24 14:55 PST | 13 Feb 24 14:55 PST |
	| addons         | ingress-addon-legacy-632000              | ingress-addon-legacy-632000 | jenkins | v1.32.0 | 13 Feb 24 14:55 PST | 13 Feb 24 14:55 PST |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-632000              | ingress-addon-legacy-632000 | jenkins | v1.32.0 | 13 Feb 24 14:55 PST | 13 Feb 24 14:55 PST |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 14:52:51
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 14:52:51.593489    2476 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:52:51.593614    2476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:52:51.593617    2476 out.go:304] Setting ErrFile to fd 2...
	I0213 14:52:51.593620    2476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:52:51.593733    2476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 14:52:51.594881    2476 out.go:298] Setting JSON to false
	I0213 14:52:51.611027    2476 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1193,"bootTime":1707863578,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 14:52:51.611125    2476 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:52:51.615156    2476 out.go:177] * [ingress-addon-legacy-632000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 14:52:51.622089    2476 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 14:52:51.626182    2476 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:52:51.622129    2476 notify.go:220] Checking for updates...
	I0213 14:52:51.630063    2476 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 14:52:51.633104    2476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:52:51.636163    2476 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 14:52:51.639054    2476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 14:52:51.642286    2476 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:52:51.646121    2476 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 14:52:51.653093    2476 start.go:298] selected driver: qemu2
	I0213 14:52:51.653099    2476 start.go:902] validating driver "qemu2" against <nil>
	I0213 14:52:51.653104    2476 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 14:52:51.655492    2476 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 14:52:51.659085    2476 out.go:177] * Automatically selected the socket_vmnet network
	I0213 14:52:51.662139    2476 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 14:52:51.662184    2476 cni.go:84] Creating CNI manager for ""
	I0213 14:52:51.662196    2476 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 14:52:51.662204    2476 start_flags.go:321] config:
	{Name:ingress-addon-legacy-632000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-632000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:52:51.666897    2476 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 14:52:51.673086    2476 out.go:177] * Starting control plane node ingress-addon-legacy-632000 in cluster ingress-addon-legacy-632000
	I0213 14:52:51.677106    2476 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0213 14:52:52.339425    2476 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0213 14:52:52.339525    2476 cache.go:56] Caching tarball of preloaded images
	I0213 14:52:52.340330    2476 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0213 14:52:52.347770    2476 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0213 14:52:52.351775    2476 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:52:52.965161    2476 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0213 14:53:13.817271    2476 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:53:13.817443    2476 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:53:14.566141    2476 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0213 14:53:14.566333    2476 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/config.json ...
	I0213 14:53:14.566353    2476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/config.json: {Name:mk6895444abaf78cf5d05cb8f614642d363e4398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:53:14.566600    2476 start.go:365] acquiring machines lock for ingress-addon-legacy-632000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 14:53:14.566629    2476 start.go:369] acquired machines lock for "ingress-addon-legacy-632000" in 24.209µs
	I0213 14:53:14.566639    2476 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-632000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 14:53:14.566681    2476 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 14:53:14.571680    2476 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0213 14:53:14.587825    2476 start.go:159] libmachine.API.Create for "ingress-addon-legacy-632000" (driver="qemu2")
	I0213 14:53:14.587847    2476 client.go:168] LocalClient.Create starting
	I0213 14:53:14.587934    2476 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 14:53:14.587963    2476 main.go:141] libmachine: Decoding PEM data...
	I0213 14:53:14.587973    2476 main.go:141] libmachine: Parsing certificate...
	I0213 14:53:14.588008    2476 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 14:53:14.588029    2476 main.go:141] libmachine: Decoding PEM data...
	I0213 14:53:14.588037    2476 main.go:141] libmachine: Parsing certificate...
	I0213 14:53:14.588379    2476 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 14:53:14.710530    2476 main.go:141] libmachine: Creating SSH key...
	I0213 14:53:14.800670    2476 main.go:141] libmachine: Creating Disk image...
	I0213 14:53:14.800679    2476 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 14:53:14.800872    2476 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/disk.qcow2
	I0213 14:53:14.813445    2476 main.go:141] libmachine: STDOUT: 
	I0213 14:53:14.813471    2476 main.go:141] libmachine: STDERR: 
	I0213 14:53:14.813540    2476 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/disk.qcow2 +20000M
	I0213 14:53:14.824572    2476 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 14:53:14.824588    2476 main.go:141] libmachine: STDERR: 
	I0213 14:53:14.824610    2476 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/disk.qcow2
	I0213 14:53:14.824618    2476 main.go:141] libmachine: Starting QEMU VM...
	I0213 14:53:14.824662    2476 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:7c:73:4b:9d:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/disk.qcow2
	I0213 14:53:14.862644    2476 main.go:141] libmachine: STDOUT: 
	I0213 14:53:14.862681    2476 main.go:141] libmachine: STDERR: 
	I0213 14:53:14.862685    2476 main.go:141] libmachine: Attempt 0
	I0213 14:53:14.862707    2476 main.go:141] libmachine: Searching for 2a:7c:73:4b:9d:e in /var/db/dhcpd_leases ...
	I0213 14:53:14.862781    2476 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0213 14:53:14.862799    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ae:13:4b:8c:b8:4e ID:1,ae:13:4b:8c:b8:4e Lease:0x65cd4428}
	I0213 14:53:14.862809    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:53:14.862815    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:53:14.862821    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:53:14.862833    2476 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:53:16.864917    2476 main.go:141] libmachine: Attempt 1
	I0213 14:53:16.864993    2476 main.go:141] libmachine: Searching for 2a:7c:73:4b:9d:e in /var/db/dhcpd_leases ...
	I0213 14:53:16.865319    2476 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0213 14:53:16.865395    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ae:13:4b:8c:b8:4e ID:1,ae:13:4b:8c:b8:4e Lease:0x65cd4428}
	I0213 14:53:16.865430    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:53:16.865464    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:53:16.865493    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:53:16.865525    2476 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:53:18.867680    2476 main.go:141] libmachine: Attempt 2
	I0213 14:53:18.867800    2476 main.go:141] libmachine: Searching for 2a:7c:73:4b:9d:e in /var/db/dhcpd_leases ...
	I0213 14:53:18.868049    2476 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0213 14:53:18.868147    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ae:13:4b:8c:b8:4e ID:1,ae:13:4b:8c:b8:4e Lease:0x65cd4428}
	I0213 14:53:18.868182    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:53:18.868214    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:53:18.868249    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:53:18.868283    2476 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:53:20.870399    2476 main.go:141] libmachine: Attempt 3
	I0213 14:53:20.870439    2476 main.go:141] libmachine: Searching for 2a:7c:73:4b:9d:e in /var/db/dhcpd_leases ...
	I0213 14:53:20.870494    2476 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0213 14:53:20.870513    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ae:13:4b:8c:b8:4e ID:1,ae:13:4b:8c:b8:4e Lease:0x65cd4428}
	I0213 14:53:20.870529    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:53:20.870535    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:53:20.870541    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:53:20.870546    2476 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:53:22.872534    2476 main.go:141] libmachine: Attempt 4
	I0213 14:53:22.872546    2476 main.go:141] libmachine: Searching for 2a:7c:73:4b:9d:e in /var/db/dhcpd_leases ...
	I0213 14:53:22.872580    2476 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0213 14:53:22.872586    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ae:13:4b:8c:b8:4e ID:1,ae:13:4b:8c:b8:4e Lease:0x65cd4428}
	I0213 14:53:22.872592    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:53:22.872598    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:53:22.872604    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:53:22.872611    2476 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:53:24.874626    2476 main.go:141] libmachine: Attempt 5
	I0213 14:53:24.874661    2476 main.go:141] libmachine: Searching for 2a:7c:73:4b:9d:e in /var/db/dhcpd_leases ...
	I0213 14:53:24.874703    2476 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0213 14:53:24.874719    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ae:13:4b:8c:b8:4e ID:1,ae:13:4b:8c:b8:4e Lease:0x65cd4428}
	I0213 14:53:24.874730    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:53:24.874736    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:53:24.874743    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:53:24.874748    2476 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:53:26.876767    2476 main.go:141] libmachine: Attempt 6
	I0213 14:53:26.876787    2476 main.go:141] libmachine: Searching for 2a:7c:73:4b:9d:e in /var/db/dhcpd_leases ...
	I0213 14:53:26.876850    2476 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0213 14:53:26.876860    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:ae:13:4b:8c:b8:4e ID:1,ae:13:4b:8c:b8:4e Lease:0x65cd4428}
	I0213 14:53:26.876866    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:86:29:da:72:48:fd ID:1,86:29:da:72:48:fd Lease:0x65cd435b}
	I0213 14:53:26.876872    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:66:80:1d:fd:c1:f3 ID:1,66:80:1d:fd:c1:f3 Lease:0x65cbf1cd}
	I0213 14:53:26.876877    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:22:10:11:d4:80:6e ID:1,22:10:11:d4:80:6e Lease:0x65cbf125}
	I0213 14:53:26.876883    2476 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65cd42ab}
	I0213 14:53:28.878934    2476 main.go:141] libmachine: Attempt 7
	I0213 14:53:28.878967    2476 main.go:141] libmachine: Searching for 2a:7c:73:4b:9d:e in /var/db/dhcpd_leases ...
	I0213 14:53:28.879078    2476 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I0213 14:53:28.879094    2476 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:2a:7c:73:4b:9d:e ID:1,2a:7c:73:4b:9d:e Lease:0x65cd4467}
	I0213 14:53:28.879098    2476 main.go:141] libmachine: Found match: 2a:7c:73:4b:9d:e
	I0213 14:53:28.879108    2476 main.go:141] libmachine: IP: 192.168.105.6
	I0213 14:53:28.879115    2476 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I0213 14:53:29.897984    2476 machine.go:88] provisioning docker machine ...
	I0213 14:53:29.898037    2476 buildroot.go:166] provisioning hostname "ingress-addon-legacy-632000"
	I0213 14:53:29.898215    2476 main.go:141] libmachine: Using SSH client type: native
	I0213 14:53:29.898982    2476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d478e0] 0x100d4a050 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0213 14:53:29.899014    2476 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-632000 && echo "ingress-addon-legacy-632000" | sudo tee /etc/hostname
	I0213 14:53:30.006806    2476 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-632000
	
	I0213 14:53:30.006948    2476 main.go:141] libmachine: Using SSH client type: native
	I0213 14:53:30.007475    2476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d478e0] 0x100d4a050 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0213 14:53:30.007494    2476 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-632000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-632000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-632000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 14:53:30.095698    2476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 14:53:30.095726    2476 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18170-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18170-979/.minikube}
	I0213 14:53:30.095746    2476 buildroot.go:174] setting up certificates
	I0213 14:53:30.095759    2476 provision.go:83] configureAuth start
	I0213 14:53:30.095768    2476 provision.go:138] copyHostCerts
	I0213 14:53:30.095831    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem
	I0213 14:53:30.095890    2476 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem, removing ...
	I0213 14:53:30.095900    2476 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem
	I0213 14:53:30.096109    2476 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem (1078 bytes)
	I0213 14:53:30.096363    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem
	I0213 14:53:30.096392    2476 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem, removing ...
	I0213 14:53:30.096397    2476 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem
	I0213 14:53:30.096497    2476 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem (1123 bytes)
	I0213 14:53:30.096655    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem
	I0213 14:53:30.096701    2476 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem, removing ...
	I0213 14:53:30.096706    2476 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem
	I0213 14:53:30.096786    2476 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem (1675 bytes)
	I0213 14:53:30.096931    2476 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-632000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-632000]
	I0213 14:53:30.282192    2476 provision.go:172] copyRemoteCerts
	I0213 14:53:30.282227    2476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 14:53:30.282237    2476 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/id_rsa Username:docker}
	I0213 14:53:30.322389    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0213 14:53:30.322440    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0213 14:53:30.329727    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0213 14:53:30.329794    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 14:53:30.337032    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0213 14:53:30.337070    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 14:53:30.343653    2476 provision.go:86] duration metric: configureAuth took 247.89625ms
	I0213 14:53:30.343660    2476 buildroot.go:189] setting minikube options for container-runtime
	I0213 14:53:30.343753    2476 config.go:182] Loaded profile config "ingress-addon-legacy-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 14:53:30.343790    2476 main.go:141] libmachine: Using SSH client type: native
	I0213 14:53:30.344001    2476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d478e0] 0x100d4a050 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0213 14:53:30.344006    2476 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 14:53:30.417894    2476 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0213 14:53:30.417903    2476 buildroot.go:70] root file system type: tmpfs
	I0213 14:53:30.417960    2476 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 14:53:30.418004    2476 main.go:141] libmachine: Using SSH client type: native
	I0213 14:53:30.418260    2476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d478e0] 0x100d4a050 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0213 14:53:30.418297    2476 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 14:53:30.499077    2476 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 14:53:30.499130    2476 main.go:141] libmachine: Using SSH client type: native
	I0213 14:53:30.499394    2476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d478e0] 0x100d4a050 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0213 14:53:30.499404    2476 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 14:53:30.867035    2476 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0213 14:53:30.867050    2476 machine.go:91] provisioned docker machine in 969.070583ms
	I0213 14:53:30.867056    2476 client.go:171] LocalClient.Create took 16.279702208s
	I0213 14:53:30.867067    2476 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-632000" took 16.279741917s
	I0213 14:53:30.867073    2476 start.go:300] post-start starting for "ingress-addon-legacy-632000" (driver="qemu2")
	I0213 14:53:30.867079    2476 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 14:53:30.867138    2476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 14:53:30.867148    2476 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/id_rsa Username:docker}
	I0213 14:53:30.905544    2476 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 14:53:30.906902    2476 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 14:53:30.906908    2476 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/addons for local assets ...
	I0213 14:53:30.906981    2476 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/files for local assets ...
	I0213 14:53:30.907089    2476 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem -> 14072.pem in /etc/ssl/certs
	I0213 14:53:30.907094    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem -> /etc/ssl/certs/14072.pem
	I0213 14:53:30.907210    2476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 14:53:30.910322    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem --> /etc/ssl/certs/14072.pem (1708 bytes)
	I0213 14:53:30.917530    2476 start.go:303] post-start completed in 50.45375ms
	I0213 14:53:30.917951    2476 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/config.json ...
	I0213 14:53:30.918138    2476 start.go:128] duration metric: createHost completed in 16.351951958s
	I0213 14:53:30.918163    2476 main.go:141] libmachine: Using SSH client type: native
	I0213 14:53:30.918374    2476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d478e0] 0x100d4a050 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0213 14:53:30.918381    2476 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 14:53:30.989391    2476 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707864810.998792002
	
	I0213 14:53:30.989401    2476 fix.go:206] guest clock: 1707864810.998792002
	I0213 14:53:30.989405    2476 fix.go:219] Guest: 2024-02-13 14:53:30.998792002 -0800 PST Remote: 2024-02-13 14:53:30.918141 -0800 PST m=+39.347624418 (delta=80.651002ms)
	I0213 14:53:30.989416    2476 fix.go:190] guest clock delta is within tolerance: 80.651002ms
	I0213 14:53:30.989419    2476 start.go:83] releasing machines lock for "ingress-addon-legacy-632000", held for 16.423285875s
	I0213 14:53:30.989699    2476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 14:53:30.989704    2476 ssh_runner.go:195] Run: cat /version.json
	I0213 14:53:30.989719    2476 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/id_rsa Username:docker}
	I0213 14:53:30.989723    2476 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/id_rsa Username:docker}
	I0213 14:53:31.028643    2476 ssh_runner.go:195] Run: systemctl --version
	I0213 14:53:31.073592    2476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 14:53:31.075408    2476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 14:53:31.075434    2476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 14:53:31.078649    2476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 14:53:31.083675    2476 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 14:53:31.083683    2476 start.go:475] detecting cgroup driver to use...
	I0213 14:53:31.083755    2476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 14:53:31.090065    2476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0213 14:53:31.093604    2476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 14:53:31.096615    2476 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 14:53:31.096638    2476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 14:53:31.099537    2476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 14:53:31.102565    2476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 14:53:31.106085    2476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 14:53:31.109754    2476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 14:53:31.113482    2476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 14:53:31.116616    2476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 14:53:31.119191    2476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 14:53:31.122219    2476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:53:31.207107    2476 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 14:53:31.213366    2476 start.go:475] detecting cgroup driver to use...
	I0213 14:53:31.213421    2476 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 14:53:31.221858    2476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 14:53:31.227057    2476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 14:53:31.233087    2476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 14:53:31.237748    2476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 14:53:31.242637    2476 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0213 14:53:31.280643    2476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 14:53:31.285240    2476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 14:53:31.290530    2476 ssh_runner.go:195] Run: which cri-dockerd
	I0213 14:53:31.291906    2476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 14:53:31.294647    2476 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 14:53:31.300173    2476 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 14:53:31.379015    2476 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 14:53:31.446855    2476 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 14:53:31.446921    2476 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 14:53:31.452012    2476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:53:31.530800    2476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 14:53:32.692865    2476 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162082167s)
	I0213 14:53:32.692955    2476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 14:53:32.702761    2476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 14:53:32.714288    2476 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I0213 14:53:32.714363    2476 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0213 14:53:32.715701    2476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 14:53:32.719240    2476 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0213 14:53:32.719282    2476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 14:53:32.724311    2476 docker.go:685] Got preloaded images: 
	I0213 14:53:32.724320    2476 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0213 14:53:32.724359    2476 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 14:53:32.727109    2476 ssh_runner.go:195] Run: which lz4
	I0213 14:53:32.728562    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0213 14:53:32.728636    2476 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 14:53:32.729903    2476 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 14:53:32.729913    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0213 14:53:34.406855    2476 docker.go:649] Took 1.678293 seconds to copy over tarball
	I0213 14:53:34.406910    2476 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 14:53:35.697159    2476 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.290272083s)
	I0213 14:53:35.697174    2476 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 14:53:35.720050    2476 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 14:53:35.723637    2476 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0213 14:53:35.728619    2476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 14:53:35.806597    2476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 14:53:37.299940    2476 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.49337s)
	I0213 14:53:37.300019    2476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 14:53:37.305962    2476 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0213 14:53:37.305969    2476 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0213 14:53:37.305974    2476 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 14:53:37.314159    2476 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 14:53:37.314173    2476 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0213 14:53:37.314309    2476 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0213 14:53:37.314356    2476 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 14:53:37.314504    2476 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 14:53:37.314807    2476 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 14:53:37.314858    2476 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0213 14:53:37.314964    2476 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 14:53:37.324119    2476 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 14:53:37.324226    2476 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0213 14:53:37.325073    2476 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 14:53:37.325137    2476 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 14:53:37.325168    2476 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0213 14:53:37.325200    2476 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 14:53:37.325213    2476 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 14:53:37.325207    2476 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	W0213 14:53:39.316963    2476 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0213 14:53:39.317535    2476 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0213 14:53:39.340361    2476 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0213 14:53:39.340441    2476 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 14:53:39.340576    2476 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	W0213 14:53:39.353845    2476 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0213 14:53:39.354067    2476 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0213 14:53:39.356326    2476 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0213 14:53:39.366422    2476 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0213 14:53:39.366457    2476 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 14:53:39.366536    2476 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0213 14:53:39.375852    2476 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0213 14:53:39.392059    2476 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0213 14:53:39.392261    2476 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0213 14:53:39.400854    2476 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0213 14:53:39.401136    2476 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0213 14:53:39.401156    2476 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 14:53:39.401193    2476 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	W0213 14:53:39.407659    2476 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0213 14:53:39.407788    2476 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 14:53:39.417504    2476 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0213 14:53:39.417515    2476 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0213 14:53:39.417537    2476 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0213 14:53:39.417557    2476 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0213 14:53:39.417568    2476 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 14:53:39.417586    2476 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0213 14:53:39.417592    2476 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 14:53:39.428342    2476 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0213 14:53:39.428375    2476 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0213 14:53:39.446299    2476 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0213 14:53:39.446408    2476 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0213 14:53:39.451075    2476 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0213 14:53:39.451168    2476 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0213 14:53:39.453141    2476 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0213 14:53:39.453157    2476 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0213 14:53:39.453185    2476 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0213 14:53:39.458442    2476 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0213 14:53:39.458461    2476 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0213 14:53:39.458507    2476 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0213 14:53:39.458928    2476 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0213 14:53:39.464260    2476 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0213 14:53:40.394918    2476 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0213 14:53:40.395419    2476 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 14:53:40.420195    2476 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0213 14:53:40.420251    2476 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 14:53:40.420391    2476 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 14:53:40.445913    2476 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0213 14:53:40.445998    2476 cache_images.go:92] LoadImages completed in 3.1400955s
	W0213 14:53:40.446058    2476 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	I0213 14:53:40.446152    2476 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 14:53:40.461065    2476 cni.go:84] Creating CNI manager for ""
	I0213 14:53:40.461084    2476 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 14:53:40.461094    2476 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 14:53:40.461108    2476 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-632000 NodeName:ingress-addon-legacy-632000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 14:53:40.461211    2476 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-632000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 14:53:40.461271    2476 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-632000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-632000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 14:53:40.461350    2476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0213 14:53:40.466364    2476 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 14:53:40.466405    2476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 14:53:40.470301    2476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0213 14:53:40.477122    2476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0213 14:53:40.482942    2476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0213 14:53:40.488344    2476 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0213 14:53:40.489615    2476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 14:53:40.493411    2476 certs.go:56] Setting up /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000 for IP: 192.168.105.6
	I0213 14:53:40.493423    2476 certs.go:190] acquiring lock for shared ca certs: {Name:mk65e421691b8fb2c09fb65e08f20f9a769da9f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:53:40.493573    2476 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key
	I0213 14:53:40.493619    2476 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key
	I0213 14:53:40.493645    2476 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.key
	I0213 14:53:40.493654    2476 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt with IP's: []
	I0213 14:53:40.557066    2476 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt ...
	I0213 14:53:40.557070    2476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: {Name:mk1e60451b75d607848278b73c6e177f9c30fc5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:53:40.557298    2476 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.key ...
	I0213 14:53:40.557303    2476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.key: {Name:mk792d5ffae7d538511a79d6a08e59eed22bd9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:53:40.557431    2476 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.key.b354f644
	I0213 14:53:40.557439    2476 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 14:53:40.637442    2476 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.crt.b354f644 ...
	I0213 14:53:40.637445    2476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.crt.b354f644: {Name:mke3a4cf03d7275f72dce8e3ed2db5c4009efa2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:53:40.637573    2476 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.key.b354f644 ...
	I0213 14:53:40.637576    2476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.key.b354f644: {Name:mkaa1cd6d78b39acee47799220e609540fb40fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:53:40.637684    2476 certs.go:337] copying /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.crt
	I0213 14:53:40.637861    2476 certs.go:341] copying /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.key
	I0213 14:53:40.638002    2476 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/proxy-client.key
	I0213 14:53:40.638010    2476 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/proxy-client.crt with IP's: []
	I0213 14:53:40.822478    2476 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/proxy-client.crt ...
	I0213 14:53:40.822484    2476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/proxy-client.crt: {Name:mka02eef6b931ba8888c87cea3aee7431bd663a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:53:40.822688    2476 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/proxy-client.key ...
	I0213 14:53:40.822692    2476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/proxy-client.key: {Name:mke159c9b9e9b88fb18e19fcbf32f13f62734156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:53:40.822826    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0213 14:53:40.822841    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0213 14:53:40.822851    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0213 14:53:40.822861    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0213 14:53:40.822872    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0213 14:53:40.822882    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0213 14:53:40.822892    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0213 14:53:40.822902    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0213 14:53:40.822984    2476 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem (1338 bytes)
	W0213 14:53:40.823017    2476 certs.go:433] ignoring /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407_empty.pem, impossibly tiny 0 bytes
	I0213 14:53:40.823036    2476 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 14:53:40.823061    2476 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem (1078 bytes)
	I0213 14:53:40.823090    2476 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem (1123 bytes)
	I0213 14:53:40.823110    2476 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem (1675 bytes)
	I0213 14:53:40.823163    2476 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem (1708 bytes)
	I0213 14:53:40.823188    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem -> /usr/share/ca-certificates/14072.pem
	I0213 14:53:40.823199    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:53:40.823208    2476 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem -> /usr/share/ca-certificates/1407.pem
	I0213 14:53:40.823621    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 14:53:40.831308    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 14:53:40.838221    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 14:53:40.845506    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 14:53:40.852547    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 14:53:40.859274    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 14:53:40.866051    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 14:53:40.873344    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0213 14:53:40.880312    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem --> /usr/share/ca-certificates/14072.pem (1708 bytes)
	I0213 14:53:40.886946    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 14:53:40.894026    2476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem --> /usr/share/ca-certificates/1407.pem (1338 bytes)
	I0213 14:53:40.901317    2476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 14:53:40.906661    2476 ssh_runner.go:195] Run: openssl version
	I0213 14:53:40.908814    2476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 14:53:40.911846    2476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:53:40.913220    2476 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:40 /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:53:40.913246    2476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 14:53:40.915126    2476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 14:53:40.918528    2476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1407.pem && ln -fs /usr/share/ca-certificates/1407.pem /etc/ssl/certs/1407.pem"
	I0213 14:53:40.922025    2476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1407.pem
	I0213 14:53:40.923614    2476 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:48 /usr/share/ca-certificates/1407.pem
	I0213 14:53:40.923638    2476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1407.pem
	I0213 14:53:40.925392    2476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1407.pem /etc/ssl/certs/51391683.0"
	I0213 14:53:40.928653    2476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14072.pem && ln -fs /usr/share/ca-certificates/14072.pem /etc/ssl/certs/14072.pem"
	I0213 14:53:40.931833    2476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14072.pem
	I0213 14:53:40.933398    2476 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:48 /usr/share/ca-certificates/14072.pem
	I0213 14:53:40.933419    2476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14072.pem
	I0213 14:53:40.935564    2476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14072.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 14:53:40.938750    2476 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 14:53:40.940200    2476 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 14:53:40.940228    2476 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-632000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:53:40.940298    2476 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 14:53:40.945826    2476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 14:53:40.949028    2476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 14:53:40.951711    2476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 14:53:40.954491    2476 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 14:53:40.954508    2476 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0213 14:53:40.983274    2476 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0213 14:53:40.983301    2476 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 14:53:41.067409    2476 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 14:53:41.067476    2476 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 14:53:41.067528    2476 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0213 14:53:41.113484    2476 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 14:53:41.113973    2476 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 14:53:41.114064    2476 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 14:53:41.200211    2476 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 14:53:41.208397    2476 out.go:204]   - Generating certificates and keys ...
	I0213 14:53:41.208432    2476 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 14:53:41.208466    2476 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 14:53:41.436336    2476 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 14:53:41.571497    2476 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 14:53:41.653921    2476 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 14:53:41.789452    2476 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 14:53:41.837869    2476 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 14:53:41.837938    2476 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-632000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0213 14:53:41.924048    2476 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 14:53:41.924130    2476 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-632000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0213 14:53:42.064188    2476 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 14:53:42.144300    2476 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 14:53:42.272259    2476 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 14:53:42.272287    2476 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 14:53:42.380663    2476 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 14:53:42.543886    2476 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 14:53:42.575003    2476 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 14:53:42.621422    2476 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 14:53:42.621871    2476 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 14:53:42.626130    2476 out.go:204]   - Booting up control plane ...
	I0213 14:53:42.626181    2476 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 14:53:42.626231    2476 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 14:53:42.626274    2476 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 14:53:42.626470    2476 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 14:53:42.627725    2476 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 14:53:54.629055    2476 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.001378 seconds
	I0213 14:53:54.629170    2476 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 14:53:54.637572    2476 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 14:53:55.164375    2476 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 14:53:55.164602    2476 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-632000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0213 14:53:55.672294    2476 kubeadm.go:322] [bootstrap-token] Using token: 97od6g.68vdk5s0euvock6l
	I0213 14:53:55.676576    2476 out.go:204]   - Configuring RBAC rules ...
	I0213 14:53:55.676723    2476 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 14:53:55.682734    2476 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 14:53:55.690883    2476 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 14:53:55.692813    2476 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 14:53:55.694735    2476 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 14:53:55.696285    2476 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 14:53:55.702245    2476 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 14:53:55.924293    2476 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 14:53:56.084018    2476 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 14:53:56.084595    2476 kubeadm.go:322] 
	I0213 14:53:56.084631    2476 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 14:53:56.084635    2476 kubeadm.go:322] 
	I0213 14:53:56.084688    2476 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 14:53:56.084694    2476 kubeadm.go:322] 
	I0213 14:53:56.084716    2476 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 14:53:56.084752    2476 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 14:53:56.084798    2476 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 14:53:56.084806    2476 kubeadm.go:322] 
	I0213 14:53:56.084842    2476 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 14:53:56.084901    2476 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 14:53:56.084944    2476 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 14:53:56.084948    2476 kubeadm.go:322] 
	I0213 14:53:56.084996    2476 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 14:53:56.085046    2476 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 14:53:56.085052    2476 kubeadm.go:322] 
	I0213 14:53:56.085102    2476 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 97od6g.68vdk5s0euvock6l \
	I0213 14:53:56.085174    2476 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d \
	I0213 14:53:56.085192    2476 kubeadm.go:322]     --control-plane 
	I0213 14:53:56.085197    2476 kubeadm.go:322] 
	I0213 14:53:56.085248    2476 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 14:53:56.085252    2476 kubeadm.go:322] 
	I0213 14:53:56.085298    2476 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 97od6g.68vdk5s0euvock6l \
	I0213 14:53:56.085371    2476 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d 
	I0213 14:53:56.085544    2476 kubeadm.go:322] W0213 22:53:40.992921    1418 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0213 14:53:56.085661    2476 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0213 14:53:56.085738    2476 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0213 14:53:56.085805    2476 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 14:53:56.085881    2476 kubeadm.go:322] W0213 22:53:42.635545    1418 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 14:53:56.085959    2476 kubeadm.go:322] W0213 22:53:42.635979    1418 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 14:53:56.085965    2476 cni.go:84] Creating CNI manager for ""
	I0213 14:53:56.085972    2476 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 14:53:56.085989    2476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 14:53:56.086059    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:53:56.086070    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fb52fe04bc8b044b129ef2ff27607d20a9fceb93 minikube.k8s.io/name=ingress-addon-legacy-632000 minikube.k8s.io/updated_at=2024_02_13T14_53_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:53:56.150314    2476 ops.go:34] apiserver oom_adj: -16
	I0213 14:53:56.150365    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:53:56.652664    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:53:57.152743    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:53:57.652796    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:53:58.152739    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:53:58.652633    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:53:59.152662    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:53:59.654339    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:00.152617    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:00.652589    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:01.152471    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:01.652342    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:02.152533    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:02.652243    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:03.152550    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:03.652625    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:04.152586    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:04.652324    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:05.152465    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:05.652458    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:06.152367    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:06.651329    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:07.152419    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:07.652487    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:08.152350    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:08.652397    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:09.152389    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:09.652337    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:10.152341    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:10.652319    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:11.152285    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:11.652254    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:12.152177    2476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 14:54:12.252994    2476 kubeadm.go:1088] duration metric: took 16.167486458s to wait for elevateKubeSystemPrivileges.
	I0213 14:54:12.253011    2476 kubeadm.go:406] StartCluster complete in 31.313738458s
	I0213 14:54:12.253021    2476 settings.go:142] acquiring lock: {Name:mkdd6397441cfaf6d06a74b65d6ddefdb863237c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:54:12.253099    2476 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:54:12.253456    2476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/kubeconfig: {Name:mkf66d96abab1e512e6f2721c341e70e5b11c9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:54:12.253667    2476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 14:54:12.253690    2476 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 14:54:12.253729    2476 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-632000"
	I0213 14:54:12.253733    2476 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-632000"
	I0213 14:54:12.253737    2476 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-632000"
	I0213 14:54:12.253741    2476 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-632000"
	I0213 14:54:12.253761    2476 host.go:66] Checking if "ingress-addon-legacy-632000" exists ...
	I0213 14:54:12.253901    2476 kapi.go:59] client config for ingress-addon-legacy-632000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102023f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 14:54:12.253944    2476 config.go:182] Loaded profile config "ingress-addon-legacy-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0213 14:54:12.254267    2476 cert_rotation.go:137] Starting client certificate rotation controller
	I0213 14:54:12.254847    2476 kapi.go:59] client config for ingress-addon-legacy-632000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102023f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 14:54:12.254953    2476 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-632000"
	I0213 14:54:12.254964    2476 host.go:66] Checking if "ingress-addon-legacy-632000" exists ...
	I0213 14:54:12.258295    2476 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 14:54:12.261366    2476 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 14:54:12.261372    2476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 14:54:12.261379    2476 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/id_rsa Username:docker}
	I0213 14:54:12.262127    2476 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 14:54:12.262132    2476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 14:54:12.262135    2476 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/ingress-addon-legacy-632000/id_rsa Username:docker}
	I0213 14:54:12.309864    2476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 14:54:12.313786    2476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 14:54:12.364073    2476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 14:54:12.583998    2476 start.go:929] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
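	For reference, the replace command above splices the following hosts stanza into the CoreDNS Corefile ahead of its forward block, which is what makes host.minikube.internal resolve to the host IP from inside the cluster (the stanza is spelled out verbatim in the sed expression above):
	
	        hosts {
	           192.168.105.1 host.minikube.internal
	           fallthrough
	        }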
	I0213 14:54:12.591190    2476 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0213 14:54:12.599084    2476 addons.go:505] enable addons completed in 345.406542ms: enabled=[storage-provisioner default-storageclass]
	I0213 14:54:12.757877    2476 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-632000" context rescaled to 1 replicas
	I0213 14:54:12.757897    2476 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 14:54:12.762052    2476 out.go:177] * Verifying Kubernetes components...
	I0213 14:54:12.770997    2476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 14:54:12.796016    2476 kapi.go:59] client config for ingress-addon-legacy-632000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102023f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 14:54:12.796150    2476 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-632000" to be "Ready" ...
	I0213 14:54:12.798797    2476 node_ready.go:49] node "ingress-addon-legacy-632000" has status "Ready":"True"
	I0213 14:54:12.798804    2476 node_ready.go:38] duration metric: took 2.64075ms waiting for node "ingress-addon-legacy-632000" to be "Ready" ...
	I0213 14:54:12.798808    2476 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 14:54:12.802609    2476 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace to be "Ready" ...
	I0213 14:54:14.813831    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:16.815376    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:19.307963    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:21.315475    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:23.317768    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:25.817127    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:28.308920    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:30.317487    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:32.817542    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:35.315800    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:37.816865    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:40.316205    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:42.809428    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:44.814944    2476 pod_ready.go:102] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"False"
	I0213 14:54:45.817587    2476 pod_ready.go:92] pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace has status "Ready":"True"
	I0213 14:54:45.817629    2476 pod_ready.go:81] duration metric: took 33.016014291s waiting for pod "coredns-66bff467f8-lm8qq" in "kube-system" namespace to be "Ready" ...
	I0213 14:54:45.817652    2476 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-632000" in "kube-system" namespace to be "Ready" ...
	I0213 14:54:45.825992    2476 pod_ready.go:92] pod "etcd-ingress-addon-legacy-632000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:54:45.826013    2476 pod_ready.go:81] duration metric: took 8.35075ms waiting for pod "etcd-ingress-addon-legacy-632000" in "kube-system" namespace to be "Ready" ...
	I0213 14:54:45.826027    2476 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-632000" in "kube-system" namespace to be "Ready" ...
	I0213 14:54:45.831856    2476 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-632000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:54:45.831873    2476 pod_ready.go:81] duration metric: took 5.83475ms waiting for pod "kube-apiserver-ingress-addon-legacy-632000" in "kube-system" namespace to be "Ready" ...
	I0213 14:54:45.831885    2476 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-632000" in "kube-system" namespace to be "Ready" ...
	I0213 14:54:45.838292    2476 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-632000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:54:45.838310    2476 pod_ready.go:81] duration metric: took 6.417125ms waiting for pod "kube-controller-manager-ingress-addon-legacy-632000" in "kube-system" namespace to be "Ready" ...
	I0213 14:54:45.838322    2476 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzt7r" in "kube-system" namespace to be "Ready" ...
	I0213 14:54:45.843564    2476 pod_ready.go:92] pod "kube-proxy-wzt7r" in "kube-system" namespace has status "Ready":"True"
	I0213 14:54:45.843578    2476 pod_ready.go:81] duration metric: took 5.248958ms waiting for pod "kube-proxy-wzt7r" in "kube-system" namespace to be "Ready" ...
	I0213 14:54:45.843587    2476 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-632000" in "kube-system" namespace to be "Ready" ...
	I0213 14:54:46.006033    2476 request.go:629] Waited for 162.30525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-632000
	I0213 14:54:46.205692    2476 request.go:629] Waited for 193.191417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-632000
	I0213 14:54:46.214853    2476 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-632000" in "kube-system" namespace has status "Ready":"True"
	I0213 14:54:46.214879    2476 pod_ready.go:81] duration metric: took 371.293041ms waiting for pod "kube-scheduler-ingress-addon-legacy-632000" in "kube-system" namespace to be "Ready" ...
	I0213 14:54:46.214900    2476 pod_ready.go:38] duration metric: took 33.417102333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 14:54:46.214949    2476 api_server.go:52] waiting for apiserver process to appear ...
	I0213 14:54:46.215219    2476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 14:54:46.232286    2476 api_server.go:72] duration metric: took 33.475387833s to wait for apiserver process to appear ...
	I0213 14:54:46.232300    2476 api_server.go:88] waiting for apiserver healthz status ...
	I0213 14:54:46.232325    2476 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0213 14:54:46.240785    2476 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0213 14:54:46.241986    2476 api_server.go:141] control plane version: v1.18.20
	I0213 14:54:46.242001    2476 api_server.go:131] duration metric: took 9.695084ms to wait for apiserver health ...
	I0213 14:54:46.242007    2476 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 14:54:46.405953    2476 request.go:629] Waited for 163.873125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0213 14:54:46.421245    2476 system_pods.go:59] 7 kube-system pods found
	I0213 14:54:46.421300    2476 system_pods.go:61] "coredns-66bff467f8-lm8qq" [a2aed4bf-e243-4d2c-b1de-279e5011ef4c] Running
	I0213 14:54:46.421311    2476 system_pods.go:61] "etcd-ingress-addon-legacy-632000" [246bcf33-8af0-451f-ab6d-b9c667a04901] Running
	I0213 14:54:46.421325    2476 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-632000" [a7b4183f-d37c-49bd-8446-1de3d5f76ff5] Running
	I0213 14:54:46.421335    2476 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-632000" [52f6fb81-51d5-4572-8a49-398feed5a538] Running
	I0213 14:54:46.421347    2476 system_pods.go:61] "kube-proxy-wzt7r" [3c78b0cd-4154-40b5-ac3b-4900da1eba5a] Running
	I0213 14:54:46.421365    2476 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-632000" [52f0535e-1490-4e4e-876b-9b33ffa4fa8f] Running
	I0213 14:54:46.421376    2476 system_pods.go:61] "storage-provisioner" [c58ea144-1cb3-4d04-9f4e-48c8ccd7f305] Running
	I0213 14:54:46.421388    2476 system_pods.go:74] duration metric: took 179.37875ms to wait for pod list to return data ...
	I0213 14:54:46.421403    2476 default_sa.go:34] waiting for default service account to be created ...
	I0213 14:54:46.605913    2476 request.go:629] Waited for 184.402958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0213 14:54:46.612607    2476 default_sa.go:45] found service account: "default"
	I0213 14:54:46.612637    2476 default_sa.go:55] duration metric: took 191.230167ms for default service account to be created ...
	I0213 14:54:46.612652    2476 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 14:54:46.805921    2476 request.go:629] Waited for 193.173292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0213 14:54:46.820349    2476 system_pods.go:86] 7 kube-system pods found
	I0213 14:54:46.820399    2476 system_pods.go:89] "coredns-66bff467f8-lm8qq" [a2aed4bf-e243-4d2c-b1de-279e5011ef4c] Running
	I0213 14:54:46.820412    2476 system_pods.go:89] "etcd-ingress-addon-legacy-632000" [246bcf33-8af0-451f-ab6d-b9c667a04901] Running
	I0213 14:54:46.820423    2476 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-632000" [a7b4183f-d37c-49bd-8446-1de3d5f76ff5] Running
	I0213 14:54:46.820433    2476 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-632000" [52f6fb81-51d5-4572-8a49-398feed5a538] Running
	I0213 14:54:46.820448    2476 system_pods.go:89] "kube-proxy-wzt7r" [3c78b0cd-4154-40b5-ac3b-4900da1eba5a] Running
	I0213 14:54:46.820461    2476 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-632000" [52f0535e-1490-4e4e-876b-9b33ffa4fa8f] Running
	I0213 14:54:46.820472    2476 system_pods.go:89] "storage-provisioner" [c58ea144-1cb3-4d04-9f4e-48c8ccd7f305] Running
	I0213 14:54:46.820485    2476 system_pods.go:126] duration metric: took 207.8305ms to wait for k8s-apps to be running ...
	I0213 14:54:46.820504    2476 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 14:54:46.820710    2476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 14:54:46.837431    2476 system_svc.go:56] duration metric: took 16.923375ms WaitForService to wait for kubelet.
	I0213 14:54:46.837457    2476 kubeadm.go:581] duration metric: took 34.080578958s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 14:54:46.837482    2476 node_conditions.go:102] verifying NodePressure condition ...
	I0213 14:54:47.005951    2476 request.go:629] Waited for 168.369083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0213 14:54:47.013746    2476 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0213 14:54:47.013801    2476 node_conditions.go:123] node cpu capacity is 2
	I0213 14:54:47.013851    2476 node_conditions.go:105] duration metric: took 176.360083ms to run NodePressure ...
	I0213 14:54:47.013881    2476 start.go:228] waiting for startup goroutines ...
	I0213 14:54:47.013901    2476 start.go:233] waiting for cluster config update ...
	I0213 14:54:47.013944    2476 start.go:242] writing updated cluster config ...
	I0213 14:54:47.015395    2476 ssh_runner.go:195] Run: rm -f paused
	I0213 14:54:47.082515    2476 start.go:600] kubectl: 1.29.1, cluster: 1.18.20 (minor skew: 11)
	I0213 14:54:47.086292    2476 out.go:177] 
	W0213 14:54:47.091321    2476 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0213 14:54:47.096290    2476 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0213 14:54:47.104281    2476 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-632000" cluster and "default" namespace by default
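	The readiness checks above (node_ready.go/pod_ready.go) and the request.go "client-side throttling" waits are plain client-go behavior: the rest.Config dumps show QPS:0 and Burst:0, so the client falls back to client-go's defaults (5 QPS, burst 10) and spaces out back-to-back GETs. A minimal sketch of the same readiness poll, reusing the kubeconfig path and pod name from this run and raising the rate limit, might look like:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Build a client from a kubeconfig; the path matches the one used in this run.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		// QPS/Burst of 0 means client-go defaults (5 QPS, burst 10), which is
		// what produces the "client-side throttling" waits in the log above.
		cfg.QPS = 50
		cfg.Burst = 100
	
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Poll until the pod reports the Ready condition, as pod_ready.go does.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bff467f8-lm8qq", metav1.GetOptions{})
			if err != nil {
				return false, nil // retry on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}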
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-02-13 22:53:27 UTC, ends at Tue 2024-02-13 22:55:59 UTC. --
	Feb 13 22:55:38 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:38.468315279Z" level=info msg="shim disconnected" id=0ad8311e8a67c86dba09155c404fa34f6ee3dfc7c8be35c02538908a0ecc9597 namespace=moby
	Feb 13 22:55:38 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:38.468344569Z" level=warning msg="cleaning up after shim disconnected" id=0ad8311e8a67c86dba09155c404fa34f6ee3dfc7c8be35c02538908a0ecc9597 namespace=moby
	Feb 13 22:55:38 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:38.468348693Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 13 22:55:42 ingress-addon-legacy-632000 dockerd[1071]: time="2024-02-13T22:55:42.382565264Z" level=info msg="ignoring event" container=9169c06c141bd3e842ecabc7246090713862451d7ab7bac5f49479e991862f3c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 22:55:42 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:42.382868123Z" level=info msg="shim disconnected" id=9169c06c141bd3e842ecabc7246090713862451d7ab7bac5f49479e991862f3c namespace=moby
	Feb 13 22:55:42 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:42.382919745Z" level=warning msg="cleaning up after shim disconnected" id=9169c06c141bd3e842ecabc7246090713862451d7ab7bac5f49479e991862f3c namespace=moby
	Feb 13 22:55:42 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:42.382926120Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 13 22:55:52 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:52.401832191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 13 22:55:52 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:52.401895272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:55:52 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:52.401905688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 13 22:55:52 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:52.401912729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 13 22:55:52 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:52.449476562Z" level=info msg="shim disconnected" id=44733c84a0748d4396bd34e3622d1e76a26db416c40f17f2515872cd6430b7b3 namespace=moby
	Feb 13 22:55:52 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:52.449572558Z" level=warning msg="cleaning up after shim disconnected" id=44733c84a0748d4396bd34e3622d1e76a26db416c40f17f2515872cd6430b7b3 namespace=moby
	Feb 13 22:55:52 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:52.449590558Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 13 22:55:52 ingress-addon-legacy-632000 dockerd[1071]: time="2024-02-13T22:55:52.449705345Z" level=info msg="ignoring event" container=44733c84a0748d4396bd34e3622d1e76a26db416c40f17f2515872cd6430b7b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 22:55:54 ingress-addon-legacy-632000 dockerd[1071]: time="2024-02-13T22:55:54.874405006Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=2fd1ddcf85afcd554b869f3f1b75c61c4f681ec173976b67e9693dde72acebff
	Feb 13 22:55:54 ingress-addon-legacy-632000 dockerd[1071]: time="2024-02-13T22:55:54.883752168Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=2fd1ddcf85afcd554b869f3f1b75c61c4f681ec173976b67e9693dde72acebff
	Feb 13 22:55:54 ingress-addon-legacy-632000 dockerd[1071]: time="2024-02-13T22:55:54.982552631Z" level=info msg="ignoring event" container=2fd1ddcf85afcd554b869f3f1b75c61c4f681ec173976b67e9693dde72acebff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 22:55:54 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:54.983226648Z" level=info msg="shim disconnected" id=2fd1ddcf85afcd554b869f3f1b75c61c4f681ec173976b67e9693dde72acebff namespace=moby
	Feb 13 22:55:54 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:54.983490097Z" level=warning msg="cleaning up after shim disconnected" id=2fd1ddcf85afcd554b869f3f1b75c61c4f681ec173976b67e9693dde72acebff namespace=moby
	Feb 13 22:55:54 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:54.983534178Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 13 22:55:55 ingress-addon-legacy-632000 dockerd[1071]: time="2024-02-13T22:55:55.017697746Z" level=info msg="ignoring event" container=06b881ff4e2685f7cd5c50e3614584cc4a691e6b7353e4c66df2d439d7501cbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 13 22:55:55 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:55.017857658Z" level=info msg="shim disconnected" id=06b881ff4e2685f7cd5c50e3614584cc4a691e6b7353e4c66df2d439d7501cbe namespace=moby
	Feb 13 22:55:55 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:55.018032735Z" level=warning msg="cleaning up after shim disconnected" id=06b881ff4e2685f7cd5c50e3614584cc4a691e6b7353e4c66df2d439d7501cbe namespace=moby
	Feb 13 22:55:55 ingress-addon-legacy-632000 dockerd[1077]: time="2024-02-13T22:55:55.018058317Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER ID   IMAGE                                      COMMAND                  CREATED              STATUS                          PORTS     NAMES
	44733c84a074   dd1b12fcb609                               "/hello-app"             7 seconds ago        Exited (1) 7 seconds ago                  k8s_hello-world-app_hello-world-app-5f5d8b66bb-nvqwt_default_dd9133f6-38c8-4be6-9b9f-06500b09e5bc_2
	02a8cdf3baa1   k8s.gcr.io/pause:3.2                       "/pause"                 34 seconds ago       Up 33 seconds                             k8s_POD_hello-world-app-5f5d8b66bb-nvqwt_default_dd9133f6-38c8-4be6-9b9f-06500b09e5bc_0
	51a2a146f60d   nginx                                      "/docker-entrypoint.…"   40 seconds ago       Up 40 seconds                             k8s_nginx_nginx_default_16401e57-bd05-4f30-b23b-eb30cd6a0f17_0
	4589cfde733c   k8s.gcr.io/pause:3.2                       "/pause"                 43 seconds ago       Up 42 seconds                             k8s_POD_nginx_default_16401e57-bd05-4f30-b23b-eb30cd6a0f17_0
	9169c06c141b   k8s.gcr.io/pause:3.2                       "/pause"                 56 seconds ago       Exited (0) 17 seconds ago                 k8s_POD_kube-ingress-dns-minikube_kube-system_4149ee13-5f9f-4ee0-91bb-19b876455e3e_0
	2fd1ddcf85af   registry.k8s.io/ingress-nginx/controller   "/usr/bin/dumb-init …"   58 seconds ago       Exited (137) 4 seconds ago                k8s_controller_ingress-nginx-controller-7fcf777cb7-qb9n6_ingress-nginx_c33f3e5f-2d57-4ab6-9ca4-65bef5793a16_0
	06b881ff4e26   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) 4 seconds ago                  k8s_POD_ingress-nginx-controller-7fcf777cb7-qb9n6_ingress-nginx_c33f3e5f-2d57-4ab6-9ca4-65bef5793a16_0
	5072715dead2   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_patch_ingress-nginx-admission-patch-g7nmc_ingress-nginx_f7e748c2-2dec-4b0f-aa5c-5cc197e4d34f_0
	fb3b2cb7a0a5   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_create_ingress-nginx-admission-create-thhw6_ingress-nginx_075e8aa1-6aa2-4fe0-bf99-27f2c5c13b2b_0
	5db6282f860d   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-create-thhw6_ingress-nginx_075e8aa1-6aa2-4fe0-bf99-27f2c5c13b2b_0
	ee55c966375f   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-patch-g7nmc_ingress-nginx_f7e748c2-2dec-4b0f-aa5c-5cc197e4d34f_0
	c4876097e181   gcr.io/k8s-minikube/storage-provisioner    "/storage-provisioner"   About a minute ago   Up About a minute                         k8s_storage-provisioner_storage-provisioner_kube-system_c58ea144-1cb3-4d04-9f4e-48c8ccd7f305_0
	dbe24f6da640   6e17ba78cf3e                               "/coredns -conf /etc…"   About a minute ago   Up About a minute                         k8s_coredns_coredns-66bff467f8-lm8qq_kube-system_a2aed4bf-e243-4d2c-b1de-279e5011ef4c_0
	f7da9613259f   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_storage-provisioner_kube-system_c58ea144-1cb3-4d04-9f4e-48c8ccd7f305_0
	23570a6fb254   565297bc6f7d                               "/usr/local/bin/kube…"   About a minute ago   Up About a minute                         k8s_kube-proxy_kube-proxy-wzt7r_kube-system_3c78b0cd-4154-40b5-ac3b-4900da1eba5a_0
	55f0bdd80dd3   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-proxy-wzt7r_kube-system_3c78b0cd-4154-40b5-ac3b-4900da1eba5a_0
	1ea97fc77375   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_coredns-66bff467f8-lm8qq_kube-system_a2aed4bf-e243-4d2c-b1de-279e5011ef4c_0
	48d0c0a44184   2694cf044d66                               "kube-apiserver --ad…"   2 minutes ago        Up 2 minutes                              k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-632000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	4fa5686032f4   ab707b0a0ea3                               "etcd --advertise-cl…"   2 minutes ago        Up 2 minutes                              k8s_etcd_etcd-ingress-addon-legacy-632000_kube-system_67990b7e3aec8d2152506f826f8ae958_0
	e63a5c2d4fbe   68a4fac29a86                               "kube-controller-man…"   2 minutes ago        Up 2 minutes                              k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-632000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	41441f444f0e   095f37015706                               "kube-scheduler --au…"   2 minutes ago        Up 2 minutes                              k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-632000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	b856e4189087   k8s.gcr.io/pause:3.2                       "/pause"                 2 minutes ago        Up 2 minutes                              k8s_POD_kube-scheduler-ingress-addon-legacy-632000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	1408dace2949   k8s.gcr.io/pause:3.2                       "/pause"                 2 minutes ago        Up 2 minutes                              k8s_POD_kube-controller-manager-ingress-addon-legacy-632000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	67c22d5b9364   k8s.gcr.io/pause:3.2                       "/pause"                 2 minutes ago        Up 2 minutes                              k8s_POD_kube-apiserver-ingress-addon-legacy-632000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	2eeada1f2874   k8s.gcr.io/pause:3.2                       "/pause"                 2 minutes ago        Up 2 minutes                              k8s_POD_etcd-ingress-addon-legacy-632000_kube-system_67990b7e3aec8d2152506f826f8ae958_0
	time="2024-02-13T22:55:59Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	
	==> coredns [dbe24f6da640] <==
	[INFO] 172.17.0.1:52997 - 55944 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032334s
	[INFO] 172.17.0.1:60963 - 60081 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013709s
	[INFO] 172.17.0.1:52997 - 36231 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032626s
	[INFO] 172.17.0.1:60963 - 723 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000016958s
	[INFO] 172.17.0.1:52997 - 31609 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048709s
	[INFO] 172.17.0.1:60963 - 43515 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012125s
	[INFO] 172.17.0.1:60963 - 60521 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032917s
	[INFO] 172.17.0.1:52997 - 62192 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029583s
	[INFO] 172.17.0.1:52997 - 52988 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000037542s
	[INFO] 172.17.0.1:60963 - 38525 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009166s
	[INFO] 172.17.0.1:60963 - 58835 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000017876s
	[INFO] 172.17.0.1:57344 - 17544 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015166s
	[INFO] 172.17.0.1:34989 - 34718 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000009959s
	[INFO] 172.17.0.1:57344 - 1655 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000018334s
	[INFO] 172.17.0.1:57344 - 52005 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012708s
	[INFO] 172.17.0.1:34989 - 13256 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000012959s
	[INFO] 172.17.0.1:34989 - 35729 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030376s
	[INFO] 172.17.0.1:57344 - 5635 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012291s
	[INFO] 172.17.0.1:57344 - 11135 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011917s
	[INFO] 172.17.0.1:34989 - 15080 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012333s
	[INFO] 172.17.0.1:34989 - 50771 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038834s
	[INFO] 172.17.0.1:57344 - 21120 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012667s
	[INFO] 172.17.0.1:57344 - 41480 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000030292s
	[INFO] 172.17.0.1:34989 - 13227 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008958s
	[INFO] 172.17.0.1:34989 - 12930 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00001225s
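	The NXDOMAIN-then-NOERROR pattern above is ordinary resolv.conf search-path expansion: with ndots:5, the resolver first tries hello-world-app.default.svc.cluster.local with each search suffix appended (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local) before the already-qualified name answers NOERROR. A representative resolver config for the querying pod, as an illustration rather than a capture from this run (the nameserver IP is the conventional kube-dns ClusterIP):
	
	        search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
	        nameserver 10.96.0.10
	        options ndots:5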
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-632000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-632000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb52fe04bc8b044b129ef2ff27607d20a9fceb93
	                    minikube.k8s.io/name=ingress-addon-legacy-632000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T14_53_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 22:53:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-632000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 22:55:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 22:55:32 +0000   Tue, 13 Feb 2024 22:53:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 22:55:32 +0000   Tue, 13 Feb 2024 22:53:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 22:55:32 +0000   Tue, 13 Feb 2024 22:53:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 22:55:32 +0000   Tue, 13 Feb 2024 22:54:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-632000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4002812Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4002812Ki
	  pods:               110
	System Info:
	  Machine ID:                 e428841ca3454d2aa5c8802932dcef89
	  System UUID:                e428841ca3454d2aa5c8802932dcef89
	  Boot ID:                    7edbbe24-b7d8-4e03-bdaa-da41a746dc55
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-nvqwt                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 coredns-66bff467f8-lm8qq                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     107s
	  kube-system                 etcd-ingress-addon-legacy-632000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-apiserver-ingress-addon-legacy-632000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-632000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-wzt7r                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-ingress-addon-legacy-632000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 117s  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  117s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  117s  kubelet     Node ingress-addon-legacy-632000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s  kubelet     Node ingress-addon-legacy-632000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s  kubelet     Node ingress-addon-legacy-632000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                117s  kubelet     Node ingress-addon-legacy-632000 status is now: NodeReady
	  Normal  Starting                 106s  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Feb13 22:53] efi: memattr: Unexpected EFI Memory Attributes table version 2
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.648752] EINJ: EINJ table not found.
	[  +0.535735] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +3.623249] systemd-fstab-generator[477]: Ignoring "noauto" for root device
	[  +0.084521] systemd-fstab-generator[488]: Ignoring "noauto" for root device
	[  +0.126188] kauditd_printk_skb: 25 callbacks suppressed
	[  +0.343615] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[  +0.172875] systemd-fstab-generator[743]: Ignoring "noauto" for root device
	[  +0.068524] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.083042] systemd-fstab-generator[767]: Ignoring "noauto" for root device
	[  +4.276547] systemd-fstab-generator[1064]: Ignoring "noauto" for root device
	[  +1.471418] kauditd_printk_skb: 107 callbacks suppressed
	[  +3.913975] systemd-fstab-generator[1539]: Ignoring "noauto" for root device
	[  +8.677390] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.079797] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.801115] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.024241] systemd-fstab-generator[2646]: Ignoring "noauto" for root device
	[Feb13 22:54] kauditd_printk_skb: 2 callbacks suppressed
	[ +32.764902] kauditd_printk_skb: 7 callbacks suppressed
	[Feb13 22:55] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [4fa5686032f4] <==
	raft2024/02/13 22:53:50 INFO: ed054832bd1917e1 became follower at term 0
	raft2024/02/13 22:53:50 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/02/13 22:53:50 INFO: ed054832bd1917e1 became follower at term 1
	raft2024/02/13 22:53:50 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2024-02-13 22:53:51.041445 W | auth: simple token is not cryptographically signed
	2024-02-13 22:53:51.093576 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-02-13 22:53:51.265338 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-02-13 22:53:51.270303 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-02-13 22:53:51.270481 I | embed: listening for peers on 192.168.105.6:2380
	2024-02-13 22:53:51.274507 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/02/13 22:53:51 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2024-02-13 22:53:51.274890 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2024/02/13 22:53:51 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2024/02/13 22:53:51 INFO: ed054832bd1917e1 became candidate at term 2
	raft2024/02/13 22:53:51 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2024/02/13 22:53:51 INFO: ed054832bd1917e1 became leader at term 2
	raft2024/02/13 22:53:51 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2024-02-13 22:53:51.835726 I | etcdserver: setting up the initial cluster version to 3.4
	2024-02-13 22:53:51.837849 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-02-13 22:53:51.838142 I | etcdserver/api: enabled capabilities for version 3.4
	2024-02-13 22:53:51.838213 I | etcdserver: published {Name:ingress-addon-legacy-632000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2024-02-13 22:53:51.838440 I | embed: ready to serve client requests
	2024-02-13 22:53:51.841964 I | embed: serving client requests on 192.168.105.6:2379
	2024-02-13 22:53:51.842107 I | embed: ready to serve client requests
	2024-02-13 22:53:51.844178 I | embed: serving client requests on 127.0.0.1:2379
	
	
	==> kernel <==
	 22:55:59 up 2 min,  0 users,  load average: 0.46, 0.32, 0.13
	Linux ingress-addon-legacy-632000 5.10.57 #1 SMP PREEMPT Thu Dec 28 19:03:47 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [48d0c0a44184] <==
	I0213 22:53:53.321922       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E0213 22:53:53.344225       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0213 22:53:53.409223       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0213 22:53:53.409223       1 cache.go:39] Caches are synced for autoregister controller
	I0213 22:53:53.409231       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0213 22:53:53.409239       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0213 22:53:53.422835       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0213 22:53:54.300744       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0213 22:53:54.300834       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0213 22:53:54.320388       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0213 22:53:54.332246       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0213 22:53:54.332278       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0213 22:53:54.465299       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0213 22:53:54.475528       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0213 22:53:54.578100       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0213 22:53:54.578643       1 controller.go:609] quota admission added evaluator for: endpoints
	I0213 22:53:54.580343       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0213 22:53:55.616434       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0213 22:53:55.906910       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0213 22:53:56.084898       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0213 22:54:02.296938       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0213 22:54:12.469468       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0213 22:54:12.480304       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0213 22:54:47.497862       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0213 22:55:16.367802       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [e63a5c2d4fbe] <==
	I0213 22:54:12.482449       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0213 22:54:12.482508       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-632000. Assuming now as a timestamp.
	I0213 22:54:12.482562       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0213 22:54:12.482700       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0213 22:54:12.482829       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-632000", UID:"ff9f07df-8982-400e-9d80-540b0ddb91d3", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-632000 event: Registered Node ingress-addon-legacy-632000 in Controller
	I0213 22:54:12.488364       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"cbcaf77f-d960-45fc-8712-c35daf3db7a9", APIVersion:"apps/v1", ResourceVersion:"317", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-lm8qq
	I0213 22:54:12.503083       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"dca3274f-e6dc-495e-94cc-d5ca285d39e3", APIVersion:"apps/v1", ResourceVersion:"211", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-wzt7r
	I0213 22:54:12.613051       1 shared_informer.go:230] Caches are synced for endpoint 
	I0213 22:54:12.645016       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0213 22:54:12.717026       1 shared_informer.go:230] Caches are synced for HPA 
	I0213 22:54:12.767140       1 shared_informer.go:230] Caches are synced for resource quota 
	I0213 22:54:12.769158       1 shared_informer.go:230] Caches are synced for attach detach 
	I0213 22:54:12.772092       1 shared_informer.go:230] Caches are synced for resource quota 
	I0213 22:54:12.868754       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0213 22:54:12.868767       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0213 22:54:12.871471       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0213 22:54:47.500736       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"71b231ad-f97e-4ef6-b8c7-79b5c7738be8", APIVersion:"apps/v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0213 22:54:47.509898       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"f7de7a57-1073-4df0-b29d-bcb1c6fb719c", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-qb9n6
	I0213 22:54:47.511316       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"e2384413-b0c3-440a-b78f-23df405ff03d", APIVersion:"batch/v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-thhw6
	I0213 22:54:47.519478       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"46e52c5b-9afe-4c27-85b5-9438b61904cd", APIVersion:"batch/v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-g7nmc
	I0213 22:54:50.773832       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"e2384413-b0c3-440a-b78f-23df405ff03d", APIVersion:"batch/v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0213 22:54:50.791305       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"46e52c5b-9afe-4c27-85b5-9438b61904cd", APIVersion:"batch/v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0213 22:55:25.657922       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"40954332-b7bb-41c4-9def-4ecf64cb4bfd", APIVersion:"apps/v1", ResourceVersion:"571", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0213 22:55:25.666489       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"d5196980-7829-465e-abb8-3241565178f1", APIVersion:"apps/v1", ResourceVersion:"572", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-nvqwt
	E0213 22:55:57.601604       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-g4l4v" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [23570a6fb254] <==
	W0213 22:54:13.098675       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0213 22:54:13.103417       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0213 22:54:13.103434       1 server_others.go:186] Using iptables Proxier.
	I0213 22:54:13.103562       1 server.go:583] Version: v1.18.20
	I0213 22:54:13.104869       1 config.go:315] Starting service config controller
	I0213 22:54:13.108978       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0213 22:54:13.108574       1 config.go:133] Starting endpoints config controller
	I0213 22:54:13.111638       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0213 22:54:13.209141       1 shared_informer.go:230] Caches are synced for service config 
	I0213 22:54:13.211762       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [41441f444f0e] <==
	W0213 22:53:53.325360       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0213 22:53:53.325377       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0213 22:53:53.357591       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0213 22:53:53.357677       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0213 22:53:53.358817       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 22:53:53.359012       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 22:53:53.360077       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0213 22:53:53.360223       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0213 22:53:53.362270       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 22:53:53.362949       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 22:53:53.363015       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 22:53:53.363071       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 22:53:53.363093       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 22:53:53.363111       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 22:53:53.363564       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 22:53:53.363615       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 22:53:53.363669       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 22:53:53.363709       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 22:53:53.363823       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 22:53:53.363899       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 22:53:54.213545       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 22:53:54.303844       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 22:53:54.352334       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 22:53:54.359856       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0213 22:53:54.760196       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 22:53:27 UTC, ends at Tue 2024-02-13 22:56:00 UTC. --
	Feb 13 22:55:39 ingress-addon-legacy-632000 kubelet[2652]: E0213 22:55:39.394626    2652 pod_workers.go:191] Error syncing pod dd9133f6-38c8-4be6-9b9f-06500b09e5bc ("hello-world-app-5f5d8b66bb-nvqwt_default(dd9133f6-38c8-4be6-9b9f-06500b09e5bc)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-nvqwt_default(dd9133f6-38c8-4be6-9b9f-06500b09e5bc)"
	Feb 13 22:55:40 ingress-addon-legacy-632000 kubelet[2652]: W0213 22:55:40.407765    2652 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-nvqwt through plugin: invalid network status for
	Feb 13 22:55:40 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:40.413889    2652 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0ad8311e8a67c86dba09155c404fa34f6ee3dfc7c8be35c02538908a0ecc9597
	Feb 13 22:55:40 ingress-addon-legacy-632000 kubelet[2652]: E0213 22:55:40.414376    2652 pod_workers.go:191] Error syncing pod dd9133f6-38c8-4be6-9b9f-06500b09e5bc ("hello-world-app-5f5d8b66bb-nvqwt_default(dd9133f6-38c8-4be6-9b9f-06500b09e5bc)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-nvqwt_default(dd9133f6-38c8-4be6-9b9f-06500b09e5bc)"
	Feb 13 22:55:41 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:41.087175    2652 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-4cnrd" (UniqueName: "kubernetes.io/secret/4149ee13-5f9f-4ee0-91bb-19b876455e3e-minikube-ingress-dns-token-4cnrd") pod "4149ee13-5f9f-4ee0-91bb-19b876455e3e" (UID: "4149ee13-5f9f-4ee0-91bb-19b876455e3e")
	Feb 13 22:55:41 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:41.091541    2652 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4149ee13-5f9f-4ee0-91bb-19b876455e3e-minikube-ingress-dns-token-4cnrd" (OuterVolumeSpecName: "minikube-ingress-dns-token-4cnrd") pod "4149ee13-5f9f-4ee0-91bb-19b876455e3e" (UID: "4149ee13-5f9f-4ee0-91bb-19b876455e3e"). InnerVolumeSpecName "minikube-ingress-dns-token-4cnrd". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 13 22:55:41 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:41.187395    2652 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-4cnrd" (UniqueName: "kubernetes.io/secret/4149ee13-5f9f-4ee0-91bb-19b876455e3e-minikube-ingress-dns-token-4cnrd") on node "ingress-addon-legacy-632000" DevicePath ""
	Feb 13 22:55:42 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:42.436737    2652 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 68e5017f86122b3beff82066e176bedd1cc31bc24ec35b6854332efb9e9af6bb
	Feb 13 22:55:52 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:52.314308    2652 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0ad8311e8a67c86dba09155c404fa34f6ee3dfc7c8be35c02538908a0ecc9597
	Feb 13 22:55:52 ingress-addon-legacy-632000 kubelet[2652]: W0213 22:55:52.462204    2652 container.go:412] Failed to create summary reader for "/kubepods/besteffort/poddd9133f6-38c8-4be6-9b9f-06500b09e5bc/44733c84a0748d4396bd34e3622d1e76a26db416c40f17f2515872cd6430b7b3": none of the resources are being tracked.
	Feb 13 22:55:52 ingress-addon-legacy-632000 kubelet[2652]: W0213 22:55:52.572321    2652 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-nvqwt through plugin: invalid network status for
	Feb 13 22:55:52 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:52.574281    2652 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0ad8311e8a67c86dba09155c404fa34f6ee3dfc7c8be35c02538908a0ecc9597
	Feb 13 22:55:52 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:52.575434    2652 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 44733c84a0748d4396bd34e3622d1e76a26db416c40f17f2515872cd6430b7b3
	Feb 13 22:55:52 ingress-addon-legacy-632000 kubelet[2652]: E0213 22:55:52.578061    2652 pod_workers.go:191] Error syncing pod dd9133f6-38c8-4be6-9b9f-06500b09e5bc ("hello-world-app-5f5d8b66bb-nvqwt_default(dd9133f6-38c8-4be6-9b9f-06500b09e5bc)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-nvqwt_default(dd9133f6-38c8-4be6-9b9f-06500b09e5bc)"
	Feb 13 22:55:52 ingress-addon-legacy-632000 kubelet[2652]: E0213 22:55:52.865900    2652 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-qb9n6.17b38e203fcd9bce", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-qb9n6", UID:"c33f3e5f-2d57-4ab6-9ca4-65bef5793a16", APIVersion:"v1", ResourceVersion:"449", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-632000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16b1abe3380ebce, ext:116981308269, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16b1abe3380ebce, ext:116981308269, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-qb9n6.17b38e203fcd9bce" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 13 22:55:52 ingress-addon-legacy-632000 kubelet[2652]: E0213 22:55:52.874268    2652 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-qb9n6.17b38e203fcd9bce", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-qb9n6", UID:"c33f3e5f-2d57-4ab6-9ca4-65bef5793a16", APIVersion:"v1", ResourceVersion:"449", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-632000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16b1abe3380ebce, ext:116981308269, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16b1abe33af920d, ext:116984365484, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-qb9n6.17b38e203fcd9bce" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 13 22:55:53 ingress-addon-legacy-632000 kubelet[2652]: W0213 22:55:53.590926    2652 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-nvqwt through plugin: invalid network status for
	Feb 13 22:55:55 ingress-addon-legacy-632000 kubelet[2652]: W0213 22:55:55.635887    2652 pod_container_deletor.go:77] Container "06b881ff4e2685f7cd5c50e3614584cc4a691e6b7353e4c66df2d439d7501cbe" not found in pod's containers
	Feb 13 22:55:57 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:57.065497    2652 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-7gk6b" (UniqueName: "kubernetes.io/secret/c33f3e5f-2d57-4ab6-9ca4-65bef5793a16-ingress-nginx-token-7gk6b") pod "c33f3e5f-2d57-4ab6-9ca4-65bef5793a16" (UID: "c33f3e5f-2d57-4ab6-9ca4-65bef5793a16")
	Feb 13 22:55:57 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:57.065609    2652 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/c33f3e5f-2d57-4ab6-9ca4-65bef5793a16-webhook-cert") pod "c33f3e5f-2d57-4ab6-9ca4-65bef5793a16" (UID: "c33f3e5f-2d57-4ab6-9ca4-65bef5793a16")
	Feb 13 22:55:57 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:57.078563    2652 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33f3e5f-2d57-4ab6-9ca4-65bef5793a16-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c33f3e5f-2d57-4ab6-9ca4-65bef5793a16" (UID: "c33f3e5f-2d57-4ab6-9ca4-65bef5793a16"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 13 22:55:57 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:57.078778    2652 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33f3e5f-2d57-4ab6-9ca4-65bef5793a16-ingress-nginx-token-7gk6b" (OuterVolumeSpecName: "ingress-nginx-token-7gk6b") pod "c33f3e5f-2d57-4ab6-9ca4-65bef5793a16" (UID: "c33f3e5f-2d57-4ab6-9ca4-65bef5793a16"). InnerVolumeSpecName "ingress-nginx-token-7gk6b". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 13 22:55:57 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:57.166090    2652 reconciler.go:319] Volume detached for volume "ingress-nginx-token-7gk6b" (UniqueName: "kubernetes.io/secret/c33f3e5f-2d57-4ab6-9ca4-65bef5793a16-ingress-nginx-token-7gk6b") on node "ingress-addon-legacy-632000" DevicePath ""
	Feb 13 22:55:57 ingress-addon-legacy-632000 kubelet[2652]: I0213 22:55:57.166191    2652 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/c33f3e5f-2d57-4ab6-9ca4-65bef5793a16-webhook-cert") on node "ingress-addon-legacy-632000" DevicePath ""
	Feb 13 22:55:58 ingress-addon-legacy-632000 kubelet[2652]: W0213 22:55:58.335381    2652 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/c33f3e5f-2d57-4ab6-9ca4-65bef5793a16/volumes" does not exist
	
	
	==> storage-provisioner [c4876097e181] <==
	I0213 22:54:19.440845       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 22:54:19.444827       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 22:54:19.444885       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 22:54:19.447473       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 22:54:19.447687       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70b3d36d-0c20-4ef8-9b57-a6f8882f6091", APIVersion:"v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-632000_b7f43027-ed71-4d6e-bdca-c1ac58564f58 became leader
	I0213 22:54:19.447768       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-632000_b7f43027-ed71-4d6e-bdca-c1ac58564f58!
	I0213 22:54:19.549780       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-632000_b7f43027-ed71-4d6e-bdca-c1ac58564f58!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-632000 -n ingress-addon-legacy-632000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-632000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (56.98s)
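
The kubelet log above tells the story of this failure: hello-world-app-5f5d8b66bb-nvqwt entered CrashLoopBackOff while the test was still waiting on it, and the ingress-nginx namespace was already terminating, so the kubelet's own events were rejected ("unable to create new content in namespace ingress-nginx because it is being terminated"). A minimal manual re-check against the same profile would be (a sketch; it assumes the profile still exists and that the deployment keeps a default app=hello-world-app label, which is not shown in the log):

	# inspect the crashing pod and its restart backoff (label selector is an assumption)
	kubectl --context ingress-addon-legacy-632000 describe pod -l app=hello-world-app
	# confirm whether the addon namespace is stuck in Terminating
	kubectl --context ingress-addon-legacy-632000 get namespace ingress-nginx -o jsonpath='{.status.phase}'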

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.76s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-744000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E0213 15:00:08.351495    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:00:13.473869    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-744000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.688617708s)

                                                
                                                
-- stdout --
	* [mount-start-1-744000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-744000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-744000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-744000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-744000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-744000 -n mount-start-1-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-744000 -n mount-start-1-744000: exit status 7 (70.412084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-744000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.76s)
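
Both VM-creation attempts above die on the same line: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver with the socket_vmnet network needs the socket_vmnet daemon listening on that UNIX socket before it can start QEMU, and the repeated refusal suggests the daemon was not running on the build agent. A quick host-side sanity check (a sketch; the paths are taken from the log above, the commands themselves are not part of the harness):

	# is the daemon up, and does the socket it should own exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# the client binary minikube execs, per the libmachine log lines
	ls -l /opt/socket_vmnet/bin/socket_vmnet_client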

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-078000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0213 15:00:23.716310    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-078000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.732438458s)

                                                
                                                
-- stdout --
	* [multinode-078000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-078000 in cluster multinode-078000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-078000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:00:16.954400    2769 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:00:16.954529    2769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:00:16.954532    2769 out.go:304] Setting ErrFile to fd 2...
	I0213 15:00:16.954535    2769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:00:16.954686    2769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:00:16.955719    2769 out.go:298] Setting JSON to false
	I0213 15:00:16.971781    2769 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1638,"bootTime":1707863578,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:00:16.971837    2769 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:00:16.977078    2769 out.go:177] * [multinode-078000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:00:16.985039    2769 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:00:16.989074    2769 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:00:16.985085    2769 notify.go:220] Checking for updates...
	I0213 15:00:16.994983    2769 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:00:16.998043    2769 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:00:16.999416    2769 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:00:17.002077    2769 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:00:17.005231    2769 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:00:17.009903    2769 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:00:17.017036    2769 start.go:298] selected driver: qemu2
	I0213 15:00:17.017042    2769 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:00:17.017048    2769 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:00:17.019315    2769 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:00:17.022063    2769 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:00:17.025095    2769 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:00:17.025135    2769 cni.go:84] Creating CNI manager for ""
	I0213 15:00:17.025145    2769 cni.go:136] 0 nodes found, recommending kindnet
	I0213 15:00:17.025149    2769 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0213 15:00:17.025155    2769 start_flags.go:321] config:
	{Name:multinode-078000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-078000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:00:17.029636    2769 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:00:17.035957    2769 out.go:177] * Starting control plane node multinode-078000 in cluster multinode-078000
	I0213 15:00:17.040044    2769 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:00:17.040074    2769 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:00:17.040091    2769 cache.go:56] Caching tarball of preloaded images
	I0213 15:00:17.040162    2769 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:00:17.040168    2769 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:00:17.040430    2769 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/multinode-078000/config.json ...
	I0213 15:00:17.040444    2769 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/multinode-078000/config.json: {Name:mkcb49149bef35164e2ad65cd77e2ce7038eaab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:00:17.040673    2769 start.go:365] acquiring machines lock for multinode-078000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:00:17.040707    2769 start.go:369] acquired machines lock for "multinode-078000" in 28.416µs
	I0213 15:00:17.040719    2769 start.go:93] Provisioning new machine with config: &{Name:multinode-078000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-078000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:00:17.040754    2769 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:00:17.049015    2769 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:00:17.067389    2769 start.go:159] libmachine.API.Create for "multinode-078000" (driver="qemu2")
	I0213 15:00:17.067429    2769 client.go:168] LocalClient.Create starting
	I0213 15:00:17.067495    2769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:00:17.067526    2769 main.go:141] libmachine: Decoding PEM data...
	I0213 15:00:17.067536    2769 main.go:141] libmachine: Parsing certificate...
	I0213 15:00:17.067578    2769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:00:17.067600    2769 main.go:141] libmachine: Decoding PEM data...
	I0213 15:00:17.067609    2769 main.go:141] libmachine: Parsing certificate...
	I0213 15:00:17.067957    2769 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:00:17.190235    2769 main.go:141] libmachine: Creating SSH key...
	I0213 15:00:17.251729    2769 main.go:141] libmachine: Creating Disk image...
	I0213 15:00:17.251735    2769 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:00:17.251904    2769 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2
	I0213 15:00:17.264389    2769 main.go:141] libmachine: STDOUT: 
	I0213 15:00:17.264408    2769 main.go:141] libmachine: STDERR: 
	I0213 15:00:17.264461    2769 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2 +20000M
	I0213 15:00:17.275294    2769 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:00:17.275311    2769 main.go:141] libmachine: STDERR: 
	I0213 15:00:17.275328    2769 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2
	I0213 15:00:17.275334    2769 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:00:17.275373    2769 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:e2:ce:67:c9:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2
	I0213 15:00:17.277069    2769 main.go:141] libmachine: STDOUT: 
	I0213 15:00:17.277083    2769 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:00:17.277103    2769 client.go:171] LocalClient.Create took 209.675292ms
	I0213 15:00:19.279301    2769 start.go:128] duration metric: createHost completed in 2.23858275s
	I0213 15:00:19.279423    2769 start.go:83] releasing machines lock for "multinode-078000", held for 2.238768167s
	W0213 15:00:19.279529    2769 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:00:19.290606    2769 out.go:177] * Deleting "multinode-078000" in qemu2 ...
	W0213 15:00:19.312936    2769 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:00:19.312961    2769 start.go:709] Will try again in 5 seconds ...
	I0213 15:00:24.315029    2769 start.go:365] acquiring machines lock for multinode-078000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:00:24.315441    2769 start.go:369] acquired machines lock for "multinode-078000" in 329.833µs
	I0213 15:00:24.315554    2769 start.go:93] Provisioning new machine with config: &{Name:multinode-078000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-078000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:00:24.315825    2769 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:00:24.329600    2769 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:00:24.378318    2769 start.go:159] libmachine.API.Create for "multinode-078000" (driver="qemu2")
	I0213 15:00:24.378365    2769 client.go:168] LocalClient.Create starting
	I0213 15:00:24.378462    2769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:00:24.378522    2769 main.go:141] libmachine: Decoding PEM data...
	I0213 15:00:24.378538    2769 main.go:141] libmachine: Parsing certificate...
	I0213 15:00:24.378586    2769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:00:24.378630    2769 main.go:141] libmachine: Decoding PEM data...
	I0213 15:00:24.378641    2769 main.go:141] libmachine: Parsing certificate...
	I0213 15:00:24.379127    2769 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:00:24.512989    2769 main.go:141] libmachine: Creating SSH key...
	I0213 15:00:24.582927    2769 main.go:141] libmachine: Creating Disk image...
	I0213 15:00:24.582935    2769 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:00:24.583131    2769 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2
	I0213 15:00:24.595565    2769 main.go:141] libmachine: STDOUT: 
	I0213 15:00:24.595599    2769 main.go:141] libmachine: STDERR: 
	I0213 15:00:24.595654    2769 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2 +20000M
	I0213 15:00:24.606325    2769 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:00:24.606339    2769 main.go:141] libmachine: STDERR: 
	I0213 15:00:24.606354    2769 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2
	I0213 15:00:24.606364    2769 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:00:24.606415    2769 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:bc:65:86:d6:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2
	I0213 15:00:24.608149    2769 main.go:141] libmachine: STDOUT: 
	I0213 15:00:24.608168    2769 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:00:24.608179    2769 client.go:171] LocalClient.Create took 229.813667ms
	I0213 15:00:26.610296    2769 start.go:128] duration metric: createHost completed in 2.294486208s
	I0213 15:00:26.610391    2769 start.go:83] releasing machines lock for "multinode-078000", held for 2.294989958s
	W0213 15:00:26.610828    2769 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-078000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-078000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:00:26.623517    2769 out.go:177] 
	W0213 15:00:26.628551    2769 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:00:26.628579    2769 out.go:239] * 
	* 
	W0213 15:00:26.631691    2769 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:00:26.642453    2769 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:88: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-078000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (67.956375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.80s)
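
The libmachine lines show exactly how the guest is launched: minikube does not open the vmnet interface itself but execs /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ..., and the client hands QEMU the socket it obtains from the daemon as -netdev socket,id=net0,fd=3. That makes the failure easy to isolate from minikube: running the client with any trivial command should reproduce the refused connection while the daemon is down (a sketch, assuming socket_vmnet_client surfaces connect errors the same way it does in the log above):

	# connect to the daemon and run a no-op instead of QEMU
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# expected while the daemon is down:
	# Failed to connect to "/var/run/socket_vmnet": Connection refused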

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (70.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:509: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.717583ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-078000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:511: failed to create busybox deployment to multinode cluster
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- rollout status deployment/busybox: exit status 1 (58.63675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.817667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.336584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.694541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.224333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.304291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.355417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E0213 15:00:44.198180    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.654583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.179084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.669625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E0213 15:01:08.465516    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 15:01:25.159458    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:01:36.172813    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.633583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:540: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:544: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.44925ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:546: failed get Pod names
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- exec  -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.279208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:554: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- exec  -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.672917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:564: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.110583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:572: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (31.830875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (70.31s)
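
Note: every retry above fails identically because the API server for the profile is never reachable. As a minimal sketch of the polling pattern this log shows, assuming the binary path and profile name from this run (the helper name and retry policy are illustrative, not minikube's actual test code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // fetchPodIPs shells out exactly like the logged command and treats a
    // non-zero exit as temporary, mirroring "failed to retrieve Pod IPs
    // (may be temporary)" above.
    func fetchPodIPs(profile string) ([]string, error) {
    	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
    		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
    	if err != nil {
    		return nil, fmt.Errorf("failed to retrieve Pod IPs (may be temporary): %w", err)
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for attempt := 0; attempt < 10; attempt++ {
    		if ips, err := fetchPodIPs("multinode-078000"); err == nil {
    			fmt.Println(ips)
    			return
    		}
    		time.Sleep(2 * time.Second) // the log shows retries spaced seconds apart
    	}
    	fmt.Println("gave up: no server found for the cluster")
    }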

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:580: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-078000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.413458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:582: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (31.817083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-078000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-078000 -v 3 --alsologtostderr: exit status 89 (42.733625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-078000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:01:37.151554    2852 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:01:37.151758    2852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:37.151761    2852 out.go:304] Setting ErrFile to fd 2...
	I0213 15:01:37.151763    2852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:37.151881    2852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:01:37.152106    2852 mustload.go:65] Loading cluster: multinode-078000
	I0213 15:01:37.152290    2852 config.go:182] Loaded profile config "multinode-078000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:01:37.156287    2852 out.go:177] * The control plane node must be running for this command
	I0213 15:01:37.160380    2852 out.go:177]   To start a cluster, run: "minikube start -p multinode-078000"

                                                
                                                
** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-078000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (31.934375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-078000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:211: (dbg) Non-zero exit: kubectl --context multinode-078000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (32.055292ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-078000

                                                
                                                
** /stderr **
multinode_test.go:213: failed to 'kubectl get nodes' with args "kubectl --context multinode-078000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:220: failed to decode json from label list: args "kubectl --context multinode-078000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (31.82075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
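
Note: the "unexpected end of JSON input" above is exactly what encoding/json returns when asked to decode empty input, which is all kubectl produced once the context lookup failed. A self-contained reproduction:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// kubectl failed, so the test had zero bytes of output to decode.
    	var labels []map[string]string
    	err := json.Unmarshal([]byte(""), &labels)
    	fmt.Println(err) // prints: unexpected end of JSON input
    }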

                                                
                                    
TestMultiNode/serial/ProfileList (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:156: expected profile "multinode-078000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-078000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-078000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-078000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (31.934917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
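
Note: the assertion above decodes `profile list --output json` and counts Config.Nodes (3 expected, 1 present in the stopped profile). A hedged Go sketch of that check, declaring only the fields visible in the logged JSON; the struct shape and binary path are assumptions for illustration:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // profileList mirrors just the parts of the logged JSON the check needs.
    type profileList struct {
    	Valid []struct {
    		Name   string
    		Config struct {
    			Nodes []struct {
    				Name         string
    				ControlPlane bool
    				Worker       bool
    			}
    		}
    	}
    }

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		panic(err)
    	}
    	for _, p := range pl.Valid {
    		fmt.Printf("%s: %d nodes\n", p.Name, len(p.Config.Nodes)) // this run: 1, expected: 3
    	}
    }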

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-078000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-078000 status --output json --alsologtostderr: exit status 7 (31.68775ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-078000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:01:37.394201    2865 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:01:37.394353    2865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:37.394356    2865 out.go:304] Setting ErrFile to fd 2...
	I0213 15:01:37.394358    2865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:37.394504    2865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:01:37.394620    2865 out.go:298] Setting JSON to true
	I0213 15:01:37.394632    2865 mustload.go:65] Loading cluster: multinode-078000
	I0213 15:01:37.394695    2865 notify.go:220] Checking for updates...
	I0213 15:01:37.394829    2865 config.go:182] Loaded profile config "multinode-078000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:01:37.394834    2865 status.go:255] checking status of multinode-078000 ...
	I0213 15:01:37.395069    2865 status.go:330] multinode-078000 host status = "Stopped" (err=<nil>)
	I0213 15:01:37.395073    2865 status.go:343] host is not running, skipping remaining checks
	I0213 15:01:37.395075    2865 status.go:257] multinode-078000 status: &{Name:multinode-078000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:181: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-078000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (31.5535ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
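
Note: with a single stopped node, `status --output json` emits one JSON object, while the test decodes into a slice ([]cmd.Status), hence the unmarshal error above. A self-contained reproduction with the exact object from the log (the error text matches up to the package name):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type Status struct {
    	Name, Host, Kubelet, APIServer, Kubeconfig string
    	Worker                                     bool
    }

    func main() {
    	single := []byte(`{"Name":"multinode-078000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
    	var statuses []Status
    	err := json.Unmarshal(single, &statuses) // object vs. slice mismatch
    	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
    }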

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-078000 node stop m03
multinode_test.go:238: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-078000 node stop m03: exit status 85 (47.72125ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:240: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-078000 node stop m03": exit status 85
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-078000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-078000 status: exit status 7 (31.810291ms)

                                                
                                                
-- stdout --
	multinode-078000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-078000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-078000 status --alsologtostderr: exit status 7 (31.841458ms)

                                                
                                                
-- stdout --
	multinode-078000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:01:37.538038    2873 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:01:37.538171    2873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:37.538174    2873 out.go:304] Setting ErrFile to fd 2...
	I0213 15:01:37.538177    2873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:37.538310    2873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:01:37.538419    2873 out.go:298] Setting JSON to false
	I0213 15:01:37.538430    2873 mustload.go:65] Loading cluster: multinode-078000
	I0213 15:01:37.538498    2873 notify.go:220] Checking for updates...
	I0213 15:01:37.538616    2873 config.go:182] Loaded profile config "multinode-078000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:01:37.538621    2873 status.go:255] checking status of multinode-078000 ...
	I0213 15:01:37.538811    2873 status.go:330] multinode-078000 host status = "Stopped" (err=<nil>)
	I0213 15:01:37.538814    2873 status.go:343] host is not running, skipping remaining checks
	I0213 15:01:37.538816    2873 status.go:257] multinode-078000 status: &{Name:multinode-078000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:257: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-078000 status --alsologtostderr": multinode-078000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (31.86825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-078000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-078000 node start m03 --alsologtostderr: exit status 85 (47.994833ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:01:37.601668    2877 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:01:37.601911    2877 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:37.601915    2877 out.go:304] Setting ErrFile to fd 2...
	I0213 15:01:37.601917    2877 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:37.602044    2877 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:01:37.602271    2877 mustload.go:65] Loading cluster: multinode-078000
	I0213 15:01:37.602457    2877 config.go:182] Loaded profile config "multinode-078000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:01:37.607060    2877 out.go:177] 
	W0213 15:01:37.610016    2877 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0213 15:01:37.610020    2877 out.go:239] * 
	* 
	W0213 15:01:37.611513    2877 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:01:37.615006    2877 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0213 15:01:37.601668    2877 out.go:291] Setting OutFile to fd 1 ...
I0213 15:01:37.601911    2877 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:01:37.601915    2877 out.go:304] Setting ErrFile to fd 2...
I0213 15:01:37.601917    2877 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:01:37.602044    2877 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
I0213 15:01:37.602271    2877 mustload.go:65] Loading cluster: multinode-078000
I0213 15:01:37.602457    2877 config.go:182] Loaded profile config "multinode-078000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:01:37.607060    2877 out.go:177] 
W0213 15:01:37.610016    2877 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0213 15:01:37.610020    2877 out.go:239] * 
* 
W0213 15:01:37.611513    2877 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0213 15:01:37.615006    2877 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-078000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-078000 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-078000 status: exit status 7 (32.160583ms)

                                                
                                                
-- stdout --
	multinode-078000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:291: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-078000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (32.125458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.11s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (5.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-078000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-078000
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-078000 --wait=true -v=8 --alsologtostderr
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-078000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.187083916s)

                                                
                                                
-- stdout --
	* [multinode-078000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-078000 in cluster multinode-078000
	* Restarting existing qemu2 VM for "multinode-078000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-078000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:01:37.806034    2887 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:01:37.806182    2887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:37.806185    2887 out.go:304] Setting ErrFile to fd 2...
	I0213 15:01:37.806188    2887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:37.806331    2887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:01:37.807292    2887 out.go:298] Setting JSON to false
	I0213 15:01:37.823203    2887 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1719,"bootTime":1707863578,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:01:37.823263    2887 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:01:37.827105    2887 out.go:177] * [multinode-078000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:01:37.834015    2887 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:01:37.838029    2887 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:01:37.834054    2887 notify.go:220] Checking for updates...
	I0213 15:01:37.844986    2887 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:01:37.849042    2887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:01:37.850511    2887 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:01:37.853995    2887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:01:37.857379    2887 config.go:182] Loaded profile config "multinode-078000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:01:37.857432    2887 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:01:37.861871    2887 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 15:01:37.869018    2887 start.go:298] selected driver: qemu2
	I0213 15:01:37.869024    2887 start.go:902] validating driver "qemu2" against &{Name:multinode-078000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-078000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:01:37.869090    2887 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:01:37.871345    2887 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:01:37.871390    2887 cni.go:84] Creating CNI manager for ""
	I0213 15:01:37.871394    2887 cni.go:136] 1 nodes found, recommending kindnet
	I0213 15:01:37.871399    2887 start_flags.go:321] config:
	{Name:multinode-078000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-078000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:01:37.875837    2887 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:01:37.883964    2887 out.go:177] * Starting control plane node multinode-078000 in cluster multinode-078000
	I0213 15:01:37.888037    2887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:01:37.888058    2887 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:01:37.888068    2887 cache.go:56] Caching tarball of preloaded images
	I0213 15:01:37.888140    2887 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:01:37.888152    2887 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:01:37.888231    2887 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/multinode-078000/config.json ...
	I0213 15:01:37.888692    2887 start.go:365] acquiring machines lock for multinode-078000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:01:37.888724    2887 start.go:369] acquired machines lock for "multinode-078000" in 26.667µs
	I0213 15:01:37.888733    2887 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:01:37.888739    2887 fix.go:54] fixHost starting: 
	I0213 15:01:37.888850    2887 fix.go:102] recreateIfNeeded on multinode-078000: state=Stopped err=<nil>
	W0213 15:01:37.888859    2887 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:01:37.892997    2887 out.go:177] * Restarting existing qemu2 VM for "multinode-078000" ...
	I0213 15:01:37.900979    2887 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:bc:65:86:d6:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2
	I0213 15:01:37.902994    2887 main.go:141] libmachine: STDOUT: 
	I0213 15:01:37.903016    2887 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:01:37.903046    2887 fix.go:56] fixHost completed within 14.308375ms
	I0213 15:01:37.903051    2887 start.go:83] releasing machines lock for "multinode-078000", held for 14.323125ms
	W0213 15:01:37.903056    2887 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:01:37.903091    2887 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:01:37.903096    2887 start.go:709] Will try again in 5 seconds ...
	I0213 15:01:42.905147    2887 start.go:365] acquiring machines lock for multinode-078000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:01:42.905475    2887 start.go:369] acquired machines lock for "multinode-078000" in 227.792µs
	I0213 15:01:42.905591    2887 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:01:42.905614    2887 fix.go:54] fixHost starting: 
	I0213 15:01:42.906296    2887 fix.go:102] recreateIfNeeded on multinode-078000: state=Stopped err=<nil>
	W0213 15:01:42.906320    2887 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:01:42.911747    2887 out.go:177] * Restarting existing qemu2 VM for "multinode-078000" ...
	I0213 15:01:42.915644    2887 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:bc:65:86:d6:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2
	I0213 15:01:42.925083    2887 main.go:141] libmachine: STDOUT: 
	I0213 15:01:42.925162    2887 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:01:42.925241    2887 fix.go:56] fixHost completed within 19.626375ms
	I0213 15:01:42.925261    2887 start.go:83] releasing machines lock for "multinode-078000", held for 19.763416ms
	W0213 15:01:42.925441    2887 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-078000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-078000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:01:42.933685    2887 out.go:177] 
	W0213 15:01:42.937741    2887 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:01:42.937820    2887 out.go:239] * 
	* 
	W0213 15:01:42.940218    2887 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:01:42.950693    2887 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:325: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-078000" : exit status 80
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-078000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (34.64225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.38s)
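
Note: both restart attempts above die in the driver with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. nothing is listening on the socket_vmnet socket that the qemu networking is launched through. A minimal Go probe for that precondition, assuming only the socket path shown in the log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the unix socket the qemu2 driver hands to socket_vmnet_client.
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		fmt.Println("socket_vmnet unreachable:", err) // matches the driver failure
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }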

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-078000 node delete m03
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-078000 node delete m03: exit status 89 (42.857916ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-078000"

                                                
                                                
-- /stdout --
multinode_test.go:424: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-078000 node delete m03": exit status 89
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-078000 status --alsologtostderr
multinode_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-078000 status --alsologtostderr: exit status 7 (31.594834ms)

                                                
                                                
-- stdout --
	multinode-078000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:01:43.140895    2901 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:01:43.141046    2901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:43.141050    2901 out.go:304] Setting ErrFile to fd 2...
	I0213 15:01:43.141052    2901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:43.141162    2901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:01:43.141279    2901 out.go:298] Setting JSON to false
	I0213 15:01:43.141291    2901 mustload.go:65] Loading cluster: multinode-078000
	I0213 15:01:43.141352    2901 notify.go:220] Checking for updates...
	I0213 15:01:43.141485    2901 config.go:182] Loaded profile config "multinode-078000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:01:43.141489    2901 status.go:255] checking status of multinode-078000 ...
	I0213 15:01:43.141683    2901 status.go:330] multinode-078000 host status = "Stopped" (err=<nil>)
	I0213 15:01:43.141687    2901 status.go:343] host is not running, skipping remaining checks
	I0213 15:01:43.141689    2901 status.go:257] multinode-078000 status: &{Name:multinode-078000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:430: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-078000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (32.09025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
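The post-mortem helper above treats exit status 7 from status --format={{.Host}} as an expected "stopped" state rather than a harness failure. A hedged sketch of that branch, using only the command shown in the log (binary path and profile name taken from the report, not invented):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Exact command the post-mortem runs above.
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "multinode-078000")
		out, err := cmd.Output() // stdout is still captured on a non-zero exit
		state := strings.TrimSpace(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			// Exit status 7 marks a stopped host, not a broken harness.
			fmt.Printf("host is not running, skipping log retrieval (state=%q)\n", state)
			return
		}
		if err != nil {
			fmt.Println("status failed:", err)
			return
		}
		fmt.Println("host state:", state)
	}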

                                                
                                    
TestMultiNode/serial/StopMultiNode (0.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-078000 stop
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-078000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-078000 status: exit status 7 (32.654ms)

                                                
                                                
-- stdout --
	multinode-078000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-078000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-078000 status --alsologtostderr: exit status 7 (31.801541ms)

                                                
                                                
-- stdout --
	multinode-078000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:01:43.299806    2909 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:01:43.299973    2909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:43.299976    2909 out.go:304] Setting ErrFile to fd 2...
	I0213 15:01:43.299978    2909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:43.300109    2909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:01:43.300213    2909 out.go:298] Setting JSON to false
	I0213 15:01:43.300227    2909 mustload.go:65] Loading cluster: multinode-078000
	I0213 15:01:43.300282    2909 notify.go:220] Checking for updates...
	I0213 15:01:43.300423    2909 config.go:182] Loaded profile config "multinode-078000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:01:43.300428    2909 status.go:255] checking status of multinode-078000 ...
	I0213 15:01:43.300619    2909 status.go:330] multinode-078000 host status = "Stopped" (err=<nil>)
	I0213 15:01:43.300623    2909 status.go:343] host is not running, skipping remaining checks
	I0213 15:01:43.300625    2909 status.go:257] multinode-078000 status: &{Name:multinode-078000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:361: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-078000 status --alsologtostderr": multinode-078000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:365: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-078000 status --alsologtostderr": multinode-078000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (31.491833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.16s)
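The assertions at multinode_test.go:361 and :365 count "host: Stopped" and "kubelet: Stopped" entries in the status output and expect one per node; with the second node never created, only the control-plane entry appears. Presumably the check is a string count along these lines (a sketch, not the test's actual code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status text as printed above; a two-node cluster stopped cleanly
		// would contain two of each line.
		out := "multinode-078000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		fmt.Println("stopped hosts:", strings.Count(out, "host: Stopped"))       // 1, want 2
		fmt.Println("stopped kubelets:", strings.Count(out, "kubelet: Stopped")) // 1, want 2
	}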

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-078000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-078000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.180851792s)

                                                
                                                
-- stdout --
	* [multinode-078000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-078000 in cluster multinode-078000
	* Restarting existing qemu2 VM for "multinode-078000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-078000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:01:43.362665    2913 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:01:43.362778    2913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:43.362781    2913 out.go:304] Setting ErrFile to fd 2...
	I0213 15:01:43.362783    2913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:01:43.362904    2913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:01:43.363822    2913 out.go:298] Setting JSON to false
	I0213 15:01:43.379678    2913 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1725,"bootTime":1707863578,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:01:43.379750    2913 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:01:43.384940    2913 out.go:177] * [multinode-078000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:01:43.392986    2913 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:01:43.396951    2913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:01:43.393032    2913 notify.go:220] Checking for updates...
	I0213 15:01:43.403932    2913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:01:43.406968    2913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:01:43.409988    2913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:01:43.412947    2913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:01:43.416308    2913 config.go:182] Loaded profile config "multinode-078000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:01:43.416572    2913 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:01:43.420952    2913 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 15:01:43.427933    2913 start.go:298] selected driver: qemu2
	I0213 15:01:43.427940    2913 start.go:902] validating driver "qemu2" against &{Name:multinode-078000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-078000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:01:43.428007    2913 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:01:43.430253    2913 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:01:43.430299    2913 cni.go:84] Creating CNI manager for ""
	I0213 15:01:43.430304    2913 cni.go:136] 1 nodes found, recommending kindnet
	I0213 15:01:43.430310    2913 start_flags.go:321] config:
	{Name:multinode-078000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-078000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:01:43.434638    2913 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:01:43.439956    2913 out.go:177] * Starting control plane node multinode-078000 in cluster multinode-078000
	I0213 15:01:43.443944    2913 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:01:43.443956    2913 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:01:43.443964    2913 cache.go:56] Caching tarball of preloaded images
	I0213 15:01:43.444003    2913 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:01:43.444008    2913 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:01:43.444070    2913 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/multinode-078000/config.json ...
	I0213 15:01:43.444535    2913 start.go:365] acquiring machines lock for multinode-078000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:01:43.444561    2913 start.go:369] acquired machines lock for "multinode-078000" in 19.917µs
	I0213 15:01:43.444569    2913 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:01:43.444576    2913 fix.go:54] fixHost starting: 
	I0213 15:01:43.444696    2913 fix.go:102] recreateIfNeeded on multinode-078000: state=Stopped err=<nil>
	W0213 15:01:43.444704    2913 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:01:43.448001    2913 out.go:177] * Restarting existing qemu2 VM for "multinode-078000" ...
	I0213 15:01:43.452011    2913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:bc:65:86:d6:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2
	I0213 15:01:43.453956    2913 main.go:141] libmachine: STDOUT: 
	I0213 15:01:43.453979    2913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:01:43.454008    2913 fix.go:56] fixHost completed within 9.434ms
	I0213 15:01:43.454012    2913 start.go:83] releasing machines lock for "multinode-078000", held for 9.448041ms
	W0213 15:01:43.454018    2913 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:01:43.454069    2913 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:01:43.454074    2913 start.go:709] Will try again in 5 seconds ...
	I0213 15:01:48.456138    2913 start.go:365] acquiring machines lock for multinode-078000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:01:48.456523    2913 start.go:369] acquired machines lock for "multinode-078000" in 244.625µs
	I0213 15:01:48.456657    2913 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:01:48.456678    2913 fix.go:54] fixHost starting: 
	I0213 15:01:48.457395    2913 fix.go:102] recreateIfNeeded on multinode-078000: state=Stopped err=<nil>
	W0213 15:01:48.457424    2913 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:01:48.461264    2913 out.go:177] * Restarting existing qemu2 VM for "multinode-078000" ...
	I0213 15:01:48.468416    2913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:bc:65:86:d6:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/multinode-078000/disk.qcow2
	I0213 15:01:48.478670    2913 main.go:141] libmachine: STDOUT: 
	I0213 15:01:48.478745    2913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:01:48.478862    2913 fix.go:56] fixHost completed within 22.182916ms
	I0213 15:01:48.478882    2913 start.go:83] releasing machines lock for "multinode-078000", held for 22.333791ms
	W0213 15:01:48.479119    2913 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-078000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-078000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:01:48.488231    2913 out.go:177] 
	W0213 15:01:48.491270    2913 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:01:48.491302    2913 out.go:239] * 
	* 
	W0213 15:01:48.493882    2913 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:01:48.500254    2913 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-078000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (68.838125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
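The stderr above shows start.go's recovery shape: one failed fixHost, a logged "Will try again in 5 seconds ...", a second identical attempt, then exit with GUEST_PROVISION. A stripped-down sketch of that fixed-delay retry (startHost is a stand-in for the qemu2 driver call, not minikube's real API):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that fails above; on this CI
	// host it always returns the socket_vmnet connection error.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the fixed delay visible in the log
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}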

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (19.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-078000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-078000-m01 --driver=qemu2 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-078000-m01 --driver=qemu2 : exit status 80 (9.83325725s)

                                                
                                                
-- stdout --
	* [multinode-078000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-078000-m01 in cluster multinode-078000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-078000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-078000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-078000-m02 --driver=qemu2 
multinode_test.go:488: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-078000-m02 --driver=qemu2 : exit status 80 (9.77844075s)

                                                
                                                
-- stdout --
	* [multinode-078000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-078000-m02 in cluster multinode-078000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-078000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-078000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:490: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-078000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-078000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-078000: exit status 89 (83.99975ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-078000"

                                                
                                                
-- /stdout --
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-078000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-078000 -n multinode-078000: exit status 7 (33.054792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-078000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.87s)
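ValidateNameConflict deliberately creates profiles named multinode-078000-m01 and -m02, which collide with minikube's "<profile>-mNN" node-naming scheme; here both start calls die on socket_vmnet before the conflict logic is ever exercised, and the final node add fails with exit status 89 because the control plane is down. A small illustrative check for the naming collision (a sketch under that naming assumption; minikube's real validation lives elsewhere):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Node names inside a profile follow "<profile>-m<two digits>".
		conflict := regexp.MustCompile(`^multinode-078000-m\d{2}$`)
		for _, name := range []string{"multinode-078000-m01", "multinode-078000-m02", "other-profile"} {
			fmt.Printf("%-22s conflicts: %v\n", name, conflict.MatchString(name))
		}
	}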

                                                
                                    
TestPreload (9.96s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-959000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-959000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.79060625s)

                                                
                                                
-- stdout --
	* [test-preload-959000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-959000 in cluster test-preload-959000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-959000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:02:08.616495    2973 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:02:08.616622    2973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:02:08.616625    2973 out.go:304] Setting ErrFile to fd 2...
	I0213 15:02:08.616628    2973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:02:08.616752    2973 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:02:08.617862    2973 out.go:298] Setting JSON to false
	I0213 15:02:08.633775    2973 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1750,"bootTime":1707863578,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:02:08.633844    2973 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:02:08.639946    2973 out.go:177] * [test-preload-959000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:02:08.646843    2973 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:02:08.650725    2973 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:02:08.646877    2973 notify.go:220] Checking for updates...
	I0213 15:02:08.656820    2973 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:02:08.660697    2973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:02:08.663907    2973 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:02:08.666861    2973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:02:08.670270    2973 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:02:08.670324    2973 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:02:08.674812    2973 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:02:08.681853    2973 start.go:298] selected driver: qemu2
	I0213 15:02:08.681858    2973 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:02:08.681863    2973 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:02:08.684121    2973 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:02:08.686782    2973 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:02:08.689896    2973 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:02:08.689934    2973 cni.go:84] Creating CNI manager for ""
	I0213 15:02:08.689942    2973 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:02:08.689947    2973 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:02:08.689953    2973 start_flags.go:321] config:
	{Name:test-preload-959000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-959000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:02:08.694799    2973 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:02:08.701819    2973 out.go:177] * Starting control plane node test-preload-959000 in cluster test-preload-959000
	I0213 15:02:08.705862    2973 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0213 15:02:08.705975    2973 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/test-preload-959000/config.json ...
	I0213 15:02:08.705992    2973 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/test-preload-959000/config.json: {Name:mkdbbb648bc991804a0e789ff9f6b2a7d064f688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:02:08.706000    2973 cache.go:107] acquiring lock: {Name:mkd2c193926e7a95476bbdf7d96957c2d4298fae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:02:08.706016    2973 cache.go:107] acquiring lock: {Name:mkf65f45b52f880793fcf71b30a8150cfb022de3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:02:08.706015    2973 cache.go:107] acquiring lock: {Name:mkb5d3d2f7a357e214d308c00b009b3f725bb940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:02:08.706077    2973 cache.go:107] acquiring lock: {Name:mkb3108a174db4a62b310883a7b8ec994465c63a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:02:08.706133    2973 cache.go:107] acquiring lock: {Name:mk2af12f47b7b33f16523e758e2512d2bc9c3321 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:02:08.706191    2973 cache.go:107] acquiring lock: {Name:mkc75bb25284e77f520cece7d12f8c2316d783e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:02:08.706206    2973 cache.go:107] acquiring lock: {Name:mka5954836c761a0c09b46737773dfb14fd88bf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:02:08.706233    2973 cache.go:107] acquiring lock: {Name:mkd946cd1c5ca9c451ed956f911a9faf2f057416 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:02:08.706511    2973 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0213 15:02:08.706251    2973 start.go:365] acquiring machines lock for test-preload-959000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:02:08.706513    2973 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0213 15:02:08.706517    2973 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:02:08.706568    2973 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0213 15:02:08.706573    2973 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0213 15:02:08.706619    2973 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:02:08.706676    2973 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0213 15:02:08.706690    2973 start.go:369] acquired machines lock for "test-preload-959000" in 166.667µs
	I0213 15:02:08.706701    2973 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0213 15:02:08.706709    2973 start.go:93] Provisioning new machine with config: &{Name:test-preload-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-959000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:02:08.706789    2973 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:02:08.715696    2973 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:02:08.720306    2973 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0213 15:02:08.721151    2973 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0213 15:02:08.721346    2973 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:02:08.721342    2973 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:02:08.723508    2973 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0213 15:02:08.723606    2973 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0213 15:02:08.723644    2973 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0213 15:02:08.723827    2973 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0213 15:02:08.735265    2973 start.go:159] libmachine.API.Create for "test-preload-959000" (driver="qemu2")
	I0213 15:02:08.735289    2973 client.go:168] LocalClient.Create starting
	I0213 15:02:08.735356    2973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:02:08.735391    2973 main.go:141] libmachine: Decoding PEM data...
	I0213 15:02:08.735401    2973 main.go:141] libmachine: Parsing certificate...
	I0213 15:02:08.735442    2973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:02:08.735473    2973 main.go:141] libmachine: Decoding PEM data...
	I0213 15:02:08.735480    2973 main.go:141] libmachine: Parsing certificate...
	I0213 15:02:08.735857    2973 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:02:08.857394    2973 main.go:141] libmachine: Creating SSH key...
	I0213 15:02:08.931284    2973 main.go:141] libmachine: Creating Disk image...
	I0213 15:02:08.931301    2973 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:02:08.931469    2973 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/disk.qcow2
	I0213 15:02:08.944548    2973 main.go:141] libmachine: STDOUT: 
	I0213 15:02:08.944570    2973 main.go:141] libmachine: STDERR: 
	I0213 15:02:08.944622    2973 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/disk.qcow2 +20000M
	I0213 15:02:08.956523    2973 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:02:08.956551    2973 main.go:141] libmachine: STDERR: 
	I0213 15:02:08.956568    2973 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/disk.qcow2
	I0213 15:02:08.956573    2973 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:02:08.956603    2973 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:cc:f5:cc:b3:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/disk.qcow2
	I0213 15:02:08.958431    2973 main.go:141] libmachine: STDOUT: 
	I0213 15:02:08.958454    2973 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:02:08.958476    2973 client.go:171] LocalClient.Create took 223.187959ms
	I0213 15:02:10.810229    2973 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0213 15:02:10.862034    2973 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0213 15:02:10.862129    2973 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0213 15:02:10.911961    2973 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0213 15:02:10.951286    2973 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0213 15:02:10.951515    2973 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0213 15:02:10.959152    2973 start.go:128] duration metric: createHost completed in 2.252413875s
	I0213 15:02:10.959196    2973 start.go:83] releasing machines lock for "test-preload-959000", held for 2.252561291s
	W0213 15:02:10.959248    2973 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:02:10.974092    2973 out.go:177] * Deleting "test-preload-959000" in qemu2 ...
	I0213 15:02:10.962121    2973 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0213 15:02:10.968174    2973 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0213 15:02:10.998269    2973 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:02:10.998310    2973 start.go:709] Will try again in 5 seconds ...
	I0213 15:02:11.137302    2973 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0213 15:02:11.137403    2973 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.431411375s
	I0213 15:02:11.137439    2973 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0213 15:02:11.455684    2973 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0213 15:02:11.455784    2973 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0213 15:02:13.118258    2973 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0213 15:02:13.118341    2973 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.412268375s
	I0213 15:02:13.118376    2973 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0213 15:02:13.429163    2973 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0213 15:02:13.429207    2973 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.723352166s
	I0213 15:02:13.429246    2973 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0213 15:02:13.467247    2973 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0213 15:02:13.467284    2973 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.761316708s
	I0213 15:02:13.467308    2973 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0213 15:02:13.819636    2973 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0213 15:02:13.819691    2973 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.113835s
	I0213 15:02:13.819757    2973 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0213 15:02:15.627809    2973 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0213 15:02:15.627860    2973 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.921874917s
	I0213 15:02:15.627897    2973 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0213 15:02:15.694635    2973 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0213 15:02:15.694678    2973 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.988887959s
	I0213 15:02:15.694710    2973 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0213 15:02:15.998423    2973 start.go:365] acquiring machines lock for test-preload-959000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:02:15.998818    2973 start.go:369] acquired machines lock for "test-preload-959000" in 320.708µs
	I0213 15:02:15.998877    2973 start.go:93] Provisioning new machine with config: &{Name:test-preload-959000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-959000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:02:15.999142    2973 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:02:16.010841    2973 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:02:16.060422    2973 start.go:159] libmachine.API.Create for "test-preload-959000" (driver="qemu2")
	I0213 15:02:16.060462    2973 client.go:168] LocalClient.Create starting
	I0213 15:02:16.060680    2973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:02:16.060766    2973 main.go:141] libmachine: Decoding PEM data...
	I0213 15:02:16.060791    2973 main.go:141] libmachine: Parsing certificate...
	I0213 15:02:16.060864    2973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:02:16.060916    2973 main.go:141] libmachine: Decoding PEM data...
	I0213 15:02:16.060928    2973 main.go:141] libmachine: Parsing certificate...
	I0213 15:02:16.061460    2973 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:02:16.193174    2973 main.go:141] libmachine: Creating SSH key...
	I0213 15:02:16.305293    2973 main.go:141] libmachine: Creating Disk image...
	I0213 15:02:16.305299    2973 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:02:16.305494    2973 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/disk.qcow2
	I0213 15:02:16.318109    2973 main.go:141] libmachine: STDOUT: 
	I0213 15:02:16.318130    2973 main.go:141] libmachine: STDERR: 
	I0213 15:02:16.318196    2973 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/disk.qcow2 +20000M
	I0213 15:02:16.329512    2973 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:02:16.329528    2973 main.go:141] libmachine: STDERR: 
	I0213 15:02:16.329540    2973 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/disk.qcow2
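
The disk image is produced by the two plain qemu-img calls logged above: a raw-to-qcow2 convert followed by a resize. A minimal sketch of the same pair of invocations via os/exec (paths are placeholders; assumes qemu-img is on PATH, as it is for the Homebrew install used in this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Convert the raw scratch file to qcow2, then grow it, mirroring the
	// "Creating 20000 MB hard disk image" step in the log.
	raw := "/tmp/test-preload/disk.qcow2.raw"
	img := "/tmp/test-preload/disk.qcow2"
	for _, args := range [][]string{
		{"convert", "-f", "raw", "-O", "qcow2", raw, img},
		{"resize", img, "+20000M"},
	} {
		out, err := exec.Command("qemu-img", args...).CombinedOutput()
		fmt.Printf("qemu-img %v\nSTDOUT/STDERR: %s\n", args, out)
		if err != nil {
			log.Fatal(err)
		}
	}
}
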
	I0213 15:02:16.329543    2973 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:02:16.329602    2973 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:3a:68:31:0a:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/test-preload-959000/disk.qcow2
	I0213 15:02:16.331472    2973 main.go:141] libmachine: STDOUT: 
	I0213 15:02:16.331492    2973 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:02:16.331506    2973 client.go:171] LocalClient.Create took 271.047958ms
	I0213 15:02:18.331740    2973 start.go:128] duration metric: createHost completed in 2.332619375s
	I0213 15:02:18.331827    2973 start.go:83] releasing machines lock for "test-preload-959000", held for 2.333055042s
	W0213 15:02:18.332163    2973 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-959000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-959000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:02:18.346601    2973 out.go:177] 
	W0213 15:02:18.350684    2973 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:02:18.350713    2973 out.go:239] * 
	* 
	W0213 15:02:18.353374    2973 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:02:18.359574    2973 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-959000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:523: *** TestPreload FAILED at 2024-02-13 15:02:18.378694 -0800 PST m=+1414.308468876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-959000 -n test-preload-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-959000 -n test-preload-959000: exit status 7 (68.622459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-959000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-959000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-959000
--- FAIL: TestPreload (9.96s)
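
Every VM create in this run dies the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never receives its network fd (note the -netdev socket,id=net0,fd=3 in the command line above). A quick pre-flight check for that condition, sketched in Go, dialing the same unix socket the qemu2 driver uses:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// "connection refused" here means the socket_vmnet daemon is not running
	// (or is listening on a different path than the driver expects).
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
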

TestScheduledStopUnix (9.89s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-534000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-534000 --memory=2048 --driver=qemu2 : exit status 80 (9.708447s)

-- stdout --
	* [scheduled-stop-534000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-534000 in cluster scheduled-stop-534000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-534000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-534000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-534000 in cluster scheduled-stop-534000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-534000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:523: *** TestScheduledStopUnix FAILED at 2024-02-13 15:02:28.256437 -0800 PST m=+1424.186512709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-534000 -n scheduled-stop-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-534000 -n scheduled-stop-534000: exit status 7 (74.500958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-534000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-534000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-534000
--- FAIL: TestScheduledStopUnix (9.89s)
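
Note the retry shape in the stdout above: the first StartHost failure is only a warning, the profile is deleted and recreated, and the run fails only when the second attempt also cannot reach socket_vmnet. A rough sketch of that retry-once control flow (illustrative only, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
)

// startWithOneRetry mimics the shape of the run above: the first failure is
// logged as a warning, the profile is torn down, and only a second failure
// is fatal.
func startWithOneRetry(start, cleanup func() error) error {
	if err := start(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		_ = cleanup() // corresponds to `* Deleting "..." in qemu2 ...`
		return start()
	}
	return nil
}

func main() {
	refused := errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	err := startWithOneRetry(
		func() error { return refused },
		func() error { return nil },
	)
	fmt.Println("second attempt:", err)
}
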

TestSkaffold (17.79s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2468615813 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2468615813 version: (1.375712041s)
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-931000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-931000 --memory=2600 --driver=qemu2 : exit status 80 (9.733994583s)

-- stdout --
	* [skaffold-931000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-931000 in cluster skaffold-931000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-931000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-931000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-931000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-931000 in cluster skaffold-931000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-931000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-931000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:523: *** TestSkaffold FAILED at 2024-02-13 15:02:46.056913 -0800 PST m=+1441.987530918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-931000 -n skaffold-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-931000 -n skaffold-931000: exit status 7 (62.946208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-931000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-931000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-931000
--- FAIL: TestSkaffold (17.79s)
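
The harness's "(dbg) Run" / "(dbg) Done" bookkeeping seen throughout this report is just a timed command execution. A minimal sketch of that pattern (the binary name is a placeholder; this is not the harness's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Run a command, time it, and report output plus any non-zero exit, in
	// the style of the "(dbg) Done: ... (1.375712041s)" lines above.
	start := time.Now()
	out, err := exec.Command("skaffold", "version").CombinedOutput()
	if err != nil {
		fmt.Printf("(dbg) Non-zero exit: %v (%s)\n", err, time.Since(start))
		return
	}
	fmt.Printf("(dbg) Done: skaffold version: (%s)\n%s", time.Since(start), out)
}
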

TestRunningBinaryUpgrade (656.76s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2096281342 start -p running-upgrade-781000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2096281342 start -p running-upgrade-781000 --memory=2200 --vm-driver=qemu2 : (1m20.880024042s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-781000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0213 15:04:58.005690    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 15:05:03.205762    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:05:30.919939    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:06:08.468148    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-781000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m56.197980292s)

-- stdout --
	* [running-upgrade-781000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting control plane node running-upgrade-781000 in cluster running-upgrade-781000
	* Updating the running qemu2 "running-upgrade-781000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying Kubernetes components...
	* Enabled addons: storage-provisioner
	
	

                                                
** stderr ** 
	I0213 15:04:57.455998    3378 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:04:57.456141    3378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:04:57.456144    3378 out.go:304] Setting ErrFile to fd 2...
	I0213 15:04:57.456147    3378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:04:57.456284    3378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:04:57.457357    3378 out.go:298] Setting JSON to false
	I0213 15:04:57.475095    3378 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1919,"bootTime":1707863578,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:04:57.475165    3378 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:04:57.479641    3378 out.go:177] * [running-upgrade-781000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:04:57.487636    3378 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:04:57.492544    3378 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:04:57.487683    3378 notify.go:220] Checking for updates...
	I0213 15:04:57.498553    3378 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:04:57.501592    3378 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:04:57.504556    3378 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:04:57.507554    3378 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:04:57.510881    3378 config.go:182] Loaded profile config "running-upgrade-781000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:04:57.514534    3378 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0213 15:04:57.517603    3378 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:04:57.520556    3378 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 15:04:57.527544    3378 start.go:298] selected driver: qemu2
	I0213 15:04:57.527549    3378 start.go:902] validating driver "qemu2" against &{Name:running-upgrade-781000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50143 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-781000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:04:57.527629    3378 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:04:57.530383    3378 cni.go:84] Creating CNI manager for ""
	I0213 15:04:57.530401    3378 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:04:57.530407    3378 start_flags.go:321] config:
	{Name:running-upgrade-781000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50143 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-781000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:04:57.530496    3378 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:04:57.534534    3378 out.go:177] * Starting control plane node running-upgrade-781000 in cluster running-upgrade-781000
	I0213 15:04:57.542592    3378 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0213 15:04:57.542613    3378 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0213 15:04:57.542619    3378 cache.go:56] Caching tarball of preloaded images
	I0213 15:04:57.542675    3378 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:04:57.542681    3378 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.1 on docker
	I0213 15:04:57.542733    3378 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/config.json ...
	I0213 15:04:57.543038    3378 start.go:365] acquiring machines lock for running-upgrade-781000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:04:57.543065    3378 start.go:369] acquired machines lock for "running-upgrade-781000" in 21.25µs
	I0213 15:04:57.543074    3378 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:04:57.543080    3378 fix.go:54] fixHost starting: 
	I0213 15:04:57.543809    3378 fix.go:102] recreateIfNeeded on running-upgrade-781000: state=Running err=<nil>
	W0213 15:04:57.543818    3378 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:04:57.551595    3378 out.go:177] * Updating the running qemu2 "running-upgrade-781000" VM ...
	I0213 15:04:57.555523    3378 machine.go:88] provisioning docker machine ...
	I0213 15:04:57.555533    3378 buildroot.go:166] provisioning hostname "running-upgrade-781000"
	I0213 15:04:57.555563    3378 main.go:141] libmachine: Using SSH client type: native
	I0213 15:04:57.555847    3378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e7b8e0] 0x102e7e050 <nil>  [] 0s} localhost 50111 <nil> <nil>}
	I0213 15:04:57.555855    3378 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-781000 && echo "running-upgrade-781000" | sudo tee /etc/hostname
	I0213 15:04:57.631300    3378 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-781000
	
	I0213 15:04:57.631347    3378 main.go:141] libmachine: Using SSH client type: native
	I0213 15:04:57.631610    3378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e7b8e0] 0x102e7e050 <nil>  [] 0s} localhost 50111 <nil> <nil>}
	I0213 15:04:57.631619    3378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-781000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-781000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-781000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 15:04:57.700178    3378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 15:04:57.700189    3378 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18170-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18170-979/.minikube}
	I0213 15:04:57.700196    3378 buildroot.go:174] setting up certificates
	I0213 15:04:57.700200    3378 provision.go:83] configureAuth start
	I0213 15:04:57.700204    3378 provision.go:138] copyHostCerts
	I0213 15:04:57.700264    3378 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem, removing ...
	I0213 15:04:57.700269    3378 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem
	I0213 15:04:57.700378    3378 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem (1078 bytes)
	I0213 15:04:57.700564    3378 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem, removing ...
	I0213 15:04:57.700567    3378 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem
	I0213 15:04:57.700607    3378 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem (1123 bytes)
	I0213 15:04:57.700724    3378 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem, removing ...
	I0213 15:04:57.700727    3378 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem
	I0213 15:04:57.700761    3378 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem (1675 bytes)
	I0213 15:04:57.700851    3378 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-781000 san=[127.0.0.1 localhost localhost 127.0.0.1 minikube running-upgrade-781000]
	I0213 15:04:57.765480    3378 provision.go:172] copyRemoteCerts
	I0213 15:04:57.765520    3378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 15:04:57.765529    3378 sshutil.go:53] new ssh client: &{IP:localhost Port:50111 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa Username:docker}
	I0213 15:04:57.801668    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 15:04:57.811566    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0213 15:04:57.818347    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 15:04:57.825217    3378 provision.go:86] duration metric: configureAuth took 125.015708ms
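
configureAuth regenerates a server certificate whose SANs come straight from the profile (see the san=[...] list above). A self-contained Go sketch of issuing such a cert; unlike minikube, which signs with its CA key pair, this version self-signs, and the names and expiry are taken from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-781000"}},
		DNSNames:     []string{"localhost", "minikube", "running-upgrade-781000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
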
	I0213 15:04:57.825226    3378 buildroot.go:189] setting minikube options for container-runtime
	I0213 15:04:57.825341    3378 config.go:182] Loaded profile config "running-upgrade-781000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:04:57.825375    3378 main.go:141] libmachine: Using SSH client type: native
	I0213 15:04:57.825593    3378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e7b8e0] 0x102e7e050 <nil>  [] 0s} localhost 50111 <nil> <nil>}
	I0213 15:04:57.825598    3378 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 15:04:57.892013    3378 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0213 15:04:57.892022    3378 buildroot.go:70] root file system type: tmpfs
	I0213 15:04:57.892078    3378 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 15:04:57.892124    3378 main.go:141] libmachine: Using SSH client type: native
	I0213 15:04:57.892357    3378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e7b8e0] 0x102e7e050 <nil>  [] 0s} localhost 50111 <nil> <nil>}
	I0213 15:04:57.892395    3378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 15:04:57.964652    3378 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 15:04:57.964709    3378 main.go:141] libmachine: Using SSH client type: native
	I0213 15:04:57.964938    3378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e7b8e0] 0x102e7e050 <nil>  [] 0s} localhost 50111 <nil> <nil>}
	I0213 15:04:57.964952    3378 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 15:04:58.033161    3378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 15:04:58.033172    3378 machine.go:91] provisioned docker machine in 477.65775ms
	I0213 15:04:58.033177    3378 start.go:300] post-start starting for "running-upgrade-781000" (driver="qemu2")
	I0213 15:04:58.033183    3378 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 15:04:58.033238    3378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 15:04:58.033247    3378 sshutil.go:53] new ssh client: &{IP:localhost Port:50111 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa Username:docker}
	I0213 15:04:58.069889    3378 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 15:04:58.071376    3378 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 15:04:58.071384    3378 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/addons for local assets ...
	I0213 15:04:58.071452    3378 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/files for local assets ...
	I0213 15:04:58.071544    3378 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem -> 14072.pem in /etc/ssl/certs
	I0213 15:04:58.071641    3378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 15:04:58.074291    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem --> /etc/ssl/certs/14072.pem (1708 bytes)
	I0213 15:04:58.081581    3378 start.go:303] post-start completed in 48.400583ms
	I0213 15:04:58.081587    3378 fix.go:56] fixHost completed within 538.525208ms
	I0213 15:04:58.081616    3378 main.go:141] libmachine: Using SSH client type: native
	I0213 15:04:58.081842    3378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e7b8e0] 0x102e7e050 <nil>  [] 0s} localhost 50111 <nil> <nil>}
	I0213 15:04:58.081849    3378 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0213 15:04:58.147973    3378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865497.882883348
	
	I0213 15:04:58.147981    3378 fix.go:206] guest clock: 1707865497.882883348
	I0213 15:04:58.147985    3378 fix.go:219] Guest: 2024-02-13 15:04:57.882883348 -0800 PST Remote: 2024-02-13 15:04:58.081588 -0800 PST m=+0.648154584 (delta=-198.704652ms)
	I0213 15:04:58.147998    3378 fix.go:190] guest clock delta is within tolerance: -198.704652ms
	I0213 15:04:58.148000    3378 start.go:83] releasing machines lock for "running-upgrade-781000", held for 604.949792ms
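
The fix.go lines above compare the guest clock (read as `date +%s.%N` over SSH) against the host clock and accept the machine when the delta is inside a tolerance. A small sketch of that comparison using the exact values from the log (the one-second tolerance is an assumption, and float64 parsing loses sub-microsecond precision):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Guest output of `date +%s.%N`, copied from the log above.
	const guestOut = "1707865497.882883348"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	// Host-side timestamp from the same log line (2024-02-13 15:04:58.081588 -0800).
	remote := time.Date(2024, 2, 13, 15, 4, 58, 81588000, time.FixedZone("PST", -8*3600))
	delta := guest.Sub(remote)
	const tolerance = time.Second // assumed bound; minikube's exact value is not shown here
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta.Abs() <= tolerance)
}
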
	I0213 15:04:58.148059    3378 ssh_runner.go:195] Run: cat /version.json
	I0213 15:04:58.148072    3378 sshutil.go:53] new ssh client: &{IP:localhost Port:50111 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa Username:docker}
	I0213 15:04:58.148059    3378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 15:04:58.148113    3378 sshutil.go:53] new ssh client: &{IP:localhost Port:50111 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa Username:docker}
	W0213 15:04:58.148776    3378 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50111: connect: connection refused
	I0213 15:04:58.148799    3378 retry.go:31] will retry after 333.390546ms: dial tcp [::1]:50111: connect: connection refused
	W0213 15:04:58.527658    3378 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0213 15:04:58.527804    3378 ssh_runner.go:195] Run: systemctl --version
	I0213 15:04:58.530903    3378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 15:04:58.533690    3378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 15:04:58.533737    3378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 15:04:58.538573    3378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 15:04:58.544993    3378 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 15:04:58.545009    3378 start.go:475] detecting cgroup driver to use...
	I0213 15:04:58.545125    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:04:58.552293    3378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0213 15:04:58.555750    3378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 15:04:58.558939    3378 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 15:04:58.558971    3378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 15:04:58.562584    3378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:04:58.566428    3378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 15:04:58.569536    3378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:04:58.572359    3378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 15:04:58.575159    3378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 15:04:58.578178    3378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 15:04:58.581258    3378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 15:04:58.584104    3378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:04:58.679401    3378 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 15:04:58.688101    3378 start.go:475] detecting cgroup driver to use...
	I0213 15:04:58.688163    3378 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 15:04:58.693942    3378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 15:04:58.698431    3378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 15:04:58.705088    3378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 15:04:58.709587    3378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 15:04:58.714020    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:04:58.719493    3378 ssh_runner.go:195] Run: which cri-dockerd
	I0213 15:04:58.720666    3378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 15:04:58.723044    3378 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 15:04:58.728047    3378 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 15:04:58.823767    3378 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 15:04:58.913307    3378 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 15:04:58.913382    3378 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 15:04:58.918763    3378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:04:59.003324    3378 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:05:02.668214    3378 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.664984209s)
	I0213 15:05:02.668281    3378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 15:05:02.673031    3378 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0213 15:05:02.679712    3378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 15:05:02.684267    3378 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 15:05:02.764055    3378 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 15:05:02.842975    3378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:05:02.927462    3378 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 15:05:02.933858    3378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 15:05:02.938956    3378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:05:03.019818    3378 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 15:05:03.058379    3378 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 15:05:03.058448    3378 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
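
The "Will wait 60s for socket path" step polls the guest with stat until /var/run/cri-dockerd.sock appears. A sketch of that kind of bounded poll, assuming a local filesystem check instead of the real stat-over-SSH:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // Poll for a socket path with a deadline, similar in spirit to the
    // 60s wait for /var/run/cri-dockerd.sock above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
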
	I0213 15:05:03.060539    3378 start.go:543] Will wait 60s for crictl version
	I0213 15:05:03.060592    3378 ssh_runner.go:195] Run: which crictl
	I0213 15:05:03.061991    3378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 15:05:03.075833    3378 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0213 15:05:03.075907    3378 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:05:03.088214    3378 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:05:03.105117    3378 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0213 15:05:03.105205    3378 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0213 15:05:03.106561    3378 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0213 15:05:03.106601    3378 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:05:03.116607    3378 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 15:05:03.116616    3378 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0213 15:05:03.116660    3378 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 15:05:03.119714    3378 ssh_runner.go:195] Run: which lz4
	I0213 15:05:03.120980    3378 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 15:05:03.122380    3378 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 15:05:03.122392    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0213 15:05:03.829869    3378 docker.go:649] Took 0.708928 seconds to copy over tarball
	I0213 15:05:03.829933    3378 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 15:05:04.983896    3378 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.15398175s)
	I0213 15:05:04.983911    3378 ssh_runner.go:146] rm: /preloaded.tar.lz4
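
Because the guest's images still carry the old k8s.gcr.io names, the preload tarball is copied in and unpacked over /var/lib/docker: an lz4-compressed tar, preserving security.capability xattrs so binaries such as kube-proxy keep their file capabilities. A local sketch of the same extraction command (the real run goes through the SSH runner):

    package main

    import (
        "log"
        "os/exec"
    )

    // Extract a preloaded image tarball the way the log does: tar with
    // lz4 decompression, keeping security.capability xattrs.
    func main() {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
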
	I0213 15:05:04.999236    3378 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 15:05:05.002215    3378 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0213 15:05:05.007234    3378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:05:05.077477    3378 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:05:06.922525    3378 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.845086875s)
	I0213 15:05:06.922806    3378 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:05:06.940448    3378 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 15:05:06.940459    3378 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0213 15:05:06.940464    3378 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 15:05:06.947983    3378 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:05:06.948060    3378 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:05:06.948184    3378 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:05:06.948237    3378 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:05:06.948286    3378 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0213 15:05:06.948404    3378 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:05:06.948836    3378 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0213 15:05:06.949117    3378 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:05:06.958490    3378 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:05:06.958622    3378 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:05:06.958710    3378 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:05:06.959463    3378 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:05:06.959526    3378 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:05:06.959617    3378 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0213 15:05:06.959592    3378 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0213 15:05:06.959677    3378 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
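
LoadImages first asks the local Docker daemon for each registry.k8s.io image; every lookup above fails ("No such image"), so each image is marked as needing transfer from the on-disk cache. A sketch of the presence check, using the same docker image inspect invocation the log shows (the real code also compares digests against the cache):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // An image "exists" when `docker image inspect` succeeds for its ref.
    func imagePresent(ref string) bool {
        err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", ref).Run()
        return err == nil
    }

    func main() {
        fmt.Println(imagePresent("registry.k8s.io/pause:3.7"))
    }
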
	I0213 15:05:09.221729    3378 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:05:09.258843    3378 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0213 15:05:09.258892    3378 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:05:09.258989    3378 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:05:09.278580    3378 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0213 15:05:09.289991    3378 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:05:09.309861    3378 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0213 15:05:09.309884    3378 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:05:09.309936    3378 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:05:09.323079    3378 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0213 15:05:09.323490    3378 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:05:09.335201    3378 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0213 15:05:09.335221    3378 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:05:09.335272    3378 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:05:09.345240    3378 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0213 15:05:09.349957    3378 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:05:09.359672    3378 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0213 15:05:09.359693    3378 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:05:09.359748    3378 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:05:09.367010    3378 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0213 15:05:09.370502    3378 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0213 15:05:09.373891    3378 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0213 15:05:09.374012    3378 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:05:09.374943    3378 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0213 15:05:09.379265    3378 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0213 15:05:09.379284    3378 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0213 15:05:09.379335    3378 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0213 15:05:09.391792    3378 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0213 15:05:09.391814    3378 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:05:09.391871    3378 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:05:09.391899    3378 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0213 15:05:09.391906    3378 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0213 15:05:09.391925    3378 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0213 15:05:09.399267    3378 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0213 15:05:09.399380    3378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0213 15:05:09.408957    3378 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0213 15:05:09.408962    3378 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0213 15:05:09.408984    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0213 15:05:09.409079    3378 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0213 15:05:09.409160    3378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0213 15:05:09.410856    3378 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0213 15:05:09.410867    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0213 15:05:09.422960    3378 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0213 15:05:09.422973    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0213 15:05:09.465830    3378 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0213 15:05:09.465853    3378 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0213 15:05:09.465859    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0213 15:05:09.508566    3378 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
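
Cached tarballs that do need transfer are scp'd into /var/lib/minikube/images and streamed into the runtime with `sudo cat <file> | docker load`. A local sketch of that pipeline, without the SSH hop:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // Stream a cached image tarball into `docker load` via stdin,
    // mirroring the `sudo cat ... | docker load` runs in the log.
    func main() {
        f, err := os.Open("/var/lib/minikube/images/pause_3.7")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("docker load: %v\n%s", err, out)
        }
    }
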
	W0213 15:05:09.846766    3378 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0213 15:05:09.847281    3378 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:05:09.883867    3378 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0213 15:05:09.883911    3378 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:05:09.884019    3378 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:05:10.495134    3378 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0213 15:05:10.495610    3378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0213 15:05:10.501156    3378 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0213 15:05:10.501238    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0213 15:05:10.552279    3378 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0213 15:05:10.552293    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0213 15:05:10.786895    3378 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0213 15:05:10.786937    3378 cache_images.go:92] LoadImages completed in 3.846583459s
	W0213 15:05:10.786973    3378 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0213 15:05:10.787045    3378 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 15:05:10.800239    3378 cni.go:84] Creating CNI manager for ""
	I0213 15:05:10.800249    3378 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:05:10.800259    3378 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 15:05:10.800268    3378 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-781000 NodeName:running-upgrade-781000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 15:05:10.800331    3378 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-781000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
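
The kubeadm options struct logged above is rendered into the InitConfiguration / ClusterConfiguration / KubeletConfiguration / KubeProxyConfiguration YAML just printed. A toy sketch of that struct-to-YAML rendering with text/template; the struct fields and template here are illustrative, not minikube's real ones:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // Hypothetical subset of the kubeadm options struct.
    type opts struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, opts{"10.0.2.15", 8443, "running-upgrade-781000"}); err != nil {
            log.Fatal(err)
        }
    }
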
	I0213 15:05:10.800364    3378 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-781000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-781000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 15:05:10.800411    3378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0213 15:05:10.803201    3378 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 15:05:10.803239    3378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 15:05:10.805768    3378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0213 15:05:10.810686    3378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 15:05:10.815957    3378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0213 15:05:10.821400    3378 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0213 15:05:10.822677    3378 certs.go:56] Setting up /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000 for IP: 10.0.2.15
	I0213 15:05:10.822685    3378 certs.go:190] acquiring lock for shared ca certs: {Name:mk65e421691b8fb2c09fb65e08f20f9a769da9f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:05:10.822803    3378 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key
	I0213 15:05:10.822842    3378 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key
	I0213 15:05:10.822893    3378 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.key
	I0213 15:05:10.822926    3378 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/apiserver.key.49504c3e
	I0213 15:05:10.822960    3378 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/proxy-client.key
	I0213 15:05:10.823082    3378 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem (1338 bytes)
	W0213 15:05:10.823102    3378 certs.go:433] ignoring /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407_empty.pem, impossibly tiny 0 bytes
	I0213 15:05:10.823108    3378 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 15:05:10.823126    3378 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem (1078 bytes)
	I0213 15:05:10.823145    3378 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem (1123 bytes)
	I0213 15:05:10.823164    3378 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem (1675 bytes)
	I0213 15:05:10.823204    3378 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem (1708 bytes)
	I0213 15:05:10.823544    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 15:05:10.830743    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 15:05:10.838930    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 15:05:10.849020    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 15:05:10.856805    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 15:05:10.863860    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 15:05:10.871251    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 15:05:10.878977    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0213 15:05:10.886464    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem --> /usr/share/ca-certificates/1407.pem (1338 bytes)
	I0213 15:05:10.893707    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem --> /usr/share/ca-certificates/14072.pem (1708 bytes)
	I0213 15:05:10.900412    3378 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 15:05:10.906888    3378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 15:05:10.911872    3378 ssh_runner.go:195] Run: openssl version
	I0213 15:05:10.913719    3378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1407.pem && ln -fs /usr/share/ca-certificates/1407.pem /etc/ssl/certs/1407.pem"
	I0213 15:05:10.916897    3378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1407.pem
	I0213 15:05:10.918316    3378 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:48 /usr/share/ca-certificates/1407.pem
	I0213 15:05:10.918334    3378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1407.pem
	I0213 15:05:10.920268    3378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1407.pem /etc/ssl/certs/51391683.0"
	I0213 15:05:10.922992    3378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14072.pem && ln -fs /usr/share/ca-certificates/14072.pem /etc/ssl/certs/14072.pem"
	I0213 15:05:10.926353    3378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14072.pem
	I0213 15:05:10.927925    3378 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:48 /usr/share/ca-certificates/14072.pem
	I0213 15:05:10.927950    3378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14072.pem
	I0213 15:05:10.929697    3378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14072.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 15:05:10.932833    3378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 15:05:10.935831    3378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:05:10.937324    3378 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:40 /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:05:10.937343    3378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:05:10.939197    3378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
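
Each CA is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL's hash-based lookup finds trusted certs. A sketch of the hash-and-link step, assuming root and local file access:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // Compute the OpenSSL subject hash for a PEM and create the
    // <hash>.0 symlink in /etc/ssl/certs, as the log above does.
    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            log.Fatal(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // replace any stale link
        if err := os.Symlink(pem, link); err != nil {
            log.Fatal(err)
        }
    }
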
	I0213 15:05:10.942034    3378 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 15:05:10.943435    3378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 15:05:10.945132    3378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 15:05:10.947096    3378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 15:05:10.948866    3378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 15:05:10.951073    3378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 15:05:10.952875    3378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
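
Each control-plane certificate is then checked with `openssl x509 -checkend 86400`, which exits non-zero if the cert expires within the next 24 hours. A Go equivalent of that check:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // Report whether a PEM certificate's NotAfter falls inside the
    // given window, matching `openssl x509 -checkend` semantics.
    func expiresSoon(path string, within time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(within).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
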
	I0213 15:05:10.954590    3378 kubeadm.go:404] StartCluster: {Name:running-upgrade-781000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50143 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-781000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:05:10.954652    3378 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:05:10.965014    3378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 15:05:10.968663    3378 host.go:66] Checking if "running-upgrade-781000" exists ...
	I0213 15:05:10.969495    3378 main.go:141] libmachine: Using SSH client type: external
	I0213 15:05:10.969515    3378 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa (-rw-------)
	I0213 15:05:10.969532    3378 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa -p 50111] /usr/bin/ssh <nil>}
	I0213 15:05:10.969547    3378 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa -p 50111 -f -NTL 50143:localhost:8443
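
libmachine then opens a background SSH tunnel to the guest: -N runs no remote command, -T allocates no tty, -L 50143:localhost:8443 forwards the host's apiserver port to the guest, and -f backgrounds ssh once the tunnel is up. A sketch that launches the same forward; the ports and key path are the ones from this particular run:

    package main

    import (
        "log"
        "os/exec"
    )

    // Recreate the background local port-forward from the log above.
    func main() {
        cmd := exec.Command("/usr/bin/ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-i", "/Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa",
            "-p", "50111", "docker@localhost",
            "-f", "-NTL", "50143:localhost:8443")
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }
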
	I0213 15:05:11.011762    3378 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 15:05:11.011818    3378 kubeadm.go:636] restartCluster start
	I0213 15:05:11.011871    3378 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 15:05:11.016133    3378 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 15:05:11.016389    3378 kubeconfig.go:135] verify returned: extract IP: "running-upgrade-781000" does not appear in /Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:05:11.016438    3378 kubeconfig.go:146] "running-upgrade-781000" context is missing from /Users/jenkins/minikube-integration/18170-979/kubeconfig - will repair!
	I0213 15:05:11.016620    3378 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/kubeconfig: {Name:mkf66d96abab1e512e6f2721c341e70e5b11c9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:05:11.017011    3378 kapi.go:59] client config for running-upgrade-781000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104157f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 15:05:11.017521    3378 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 15:05:11.020345    3378 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-781000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
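
The "needs reconfigure" decision comes straight from diff's exit status: `diff -u` returns 0 when the rendered kubeadm.yaml matches the one on disk and 1 when they differ, as they do here (the criSocket scheme and cgroupDriver changed). A sketch of that check:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // diff exits non-zero when files differ; use that to decide
    // whether the cluster needs reconfiguring, as the log does.
    func needsReconfigure(oldPath, newPath string) bool {
        err := exec.Command("diff", "-u", oldPath, newPath).Run()
        return err != nil
    }

    func main() {
        fmt.Println(needsReconfigure(
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new"))
    }
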
	I0213 15:05:11.020350    3378 kubeadm.go:1135] stopping kube-system containers ...
	I0213 15:05:11.020386    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:05:11.031643    3378 docker.go:483] Stopping containers: [23529836259f bc19bc859f1e 0cf09ffe8209 80396ef946cf 62fb42b3b949 e46e2621e8f2 a80c8b77c579 6d71f69fc500 8caf827e4484 08b6ca85b0c4 e2ce89c0b03f 2788e86c6006 01408f3d155b e6003fa7cec0]
	I0213 15:05:11.031728    3378 ssh_runner.go:195] Run: docker stop 23529836259f bc19bc859f1e 0cf09ffe8209 80396ef946cf 62fb42b3b949 e46e2621e8f2 a80c8b77c579 6d71f69fc500 8caf827e4484 08b6ca85b0c4 e2ce89c0b03f 2788e86c6006 01408f3d155b e6003fa7cec0
	I0213 15:05:11.042461    3378 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 15:05:11.139872    3378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:05:11.144497    3378 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 13 23:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Feb 13 23:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Feb 13 23:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Feb 13 23:04 /etc/kubernetes/scheduler.conf
	
	I0213 15:05:11.144536    3378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0213 15:05:11.148361    3378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0213 15:05:11.151903    3378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0213 15:05:11.154878    3378 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 15:05:11.154906    3378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0213 15:05:11.157801    3378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0213 15:05:11.160723    3378 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0213 15:05:11.160751    3378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0213 15:05:11.164066    3378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 15:05:11.166869    3378 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 15:05:11.166875    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:05:11.198123    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:05:11.471524    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:05:11.672817    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:05:11.695023    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
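
Since this is a restart rather than a fresh init, the cluster is reconfigured by replaying individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated config instead of running a full `kubeadm init`. A sketch of that phase sequence; paths match this run:

    package main

    import (
        "log"
        "os/exec"
    )

    // Replay the kubeadm init phases shown in the log, in order.
    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                log.Fatalf("kubeadm %v: %v\n%s", p, err, out)
            }
        }
    }
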
	I0213 15:05:11.715264    3378 api_server.go:52] waiting for apiserver process to appear ...
	I0213 15:05:11.715329    3378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:05:12.217423    3378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:05:12.717436    3378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:05:13.217377    3378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:05:13.221885    3378 api_server.go:72] duration metric: took 1.506668167s to wait for apiserver process to appear ...
	I0213 15:05:13.221896    3378 api_server.go:88] waiting for apiserver healthz status ...
	I0213 15:05:13.221906    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:05:18.223965    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:05:18.224010    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:05:23.224391    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:05:23.224476    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:05:28.227182    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:05:28.227227    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:05:33.230752    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:05:33.230795    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:05:38.233823    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:05:38.233898    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:05:43.237156    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:05:43.237256    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:05:48.240449    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:05:48.240528    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:05:53.242858    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:05:53.242929    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:05:58.245944    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:05:58.246040    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:06:03.248644    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:06:03.248687    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:06:08.251250    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:06:08.251328    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:06:13.254105    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
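
The readiness wait issues a GET against https://10.0.2.15:8443/healthz with a short client timeout and retries until its deadline; here every probe times out, which is what ultimately fails this upgrade test. A sketch of such a probe; TLS verification is skipped only to keep the sketch self-contained, whereas the real client trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // Probe the apiserver /healthz endpoint with a 5s client timeout,
    // retrying until the overall deadline, as the loop above does.
    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                fmt.Println("healthz:", resp.Status)
                return
            }
            fmt.Println("stopped:", err)
            time.Sleep(500 * time.Millisecond)
        }
    }
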
	I0213 15:06:13.254540    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:06:13.295460    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:06:13.295634    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:06:13.318077    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:06:13.318205    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:06:13.333027    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:06:13.333093    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:06:13.345388    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:06:13.345469    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:06:13.358649    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:06:13.358717    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:06:13.371985    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:06:13.372068    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:06:13.381829    3378 logs.go:276] 0 containers: []
	W0213 15:06:13.381840    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:06:13.381901    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:06:13.392674    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:06:13.392689    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:06:13.392695    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:06:13.466683    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:06:13.466696    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:06:13.481255    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:06:13.481267    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:06:13.498428    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:06:13.498437    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:06:13.509934    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:06:13.509947    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:06:13.523563    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:06:13.523571    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:06:13.539373    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:06:13.539383    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:06:13.576043    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:06:13.576055    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:06:13.600392    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:06:13.600404    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:06:13.611219    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:06:13.611233    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:06:13.625725    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:06:13.625739    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:06:13.636841    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:06:13.636852    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:06:13.648718    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:06:13.648732    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:06:13.653387    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:06:13.653394    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:06:13.668394    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:06:13.668405    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:06:13.683943    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:06:13.683953    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:06:13.695345    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:06:13.695356    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
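
When healthz never answers, minikube falls back to evidence gathering: journalctl for kubelet and docker, `kubectl describe nodes`, and `docker logs --tail 400` for each control-plane container found earlier. A sketch of the per-container tail; the container IDs are the ones from this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Tail the last 400 log lines from each control-plane container,
    // matching the `docker logs --tail 400 <id>` runs in the log.
    func main() {
        ids := []string{"208760af4a08", "8caf827e4484", "b8e87f0d0361"}
        for _, id := range ids {
            out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                fmt.Printf("%s: %v\n", id, err)
                continue
            }
            fmt.Printf("--- %s ---\n%s", id, out)
        }
    }
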
	I0213 15:06:16.222231    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:06:21.224699    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:06:21.225182    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:06:21.264115    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:06:21.264244    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:06:21.286163    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:06:21.286282    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:06:21.302022    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:06:21.302095    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:06:21.314473    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:06:21.314543    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:06:21.325414    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:06:21.325471    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:06:21.335928    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:06:21.336011    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:06:21.346733    3378 logs.go:276] 0 containers: []
	W0213 15:06:21.346744    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:06:21.346801    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:06:21.357035    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:06:21.357054    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:06:21.357060    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:06:21.373018    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:06:21.373029    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:06:21.397719    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:06:21.397729    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:06:21.434395    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:06:21.434407    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:06:21.470755    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:06:21.470767    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:06:21.485530    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:06:21.485540    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:06:21.497046    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:06:21.497057    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:06:21.508346    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:06:21.508355    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:06:21.512588    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:06:21.512597    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:06:21.536841    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:06:21.536850    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:06:21.548541    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:06:21.548551    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:06:21.564867    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:06:21.564878    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:06:21.581854    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:06:21.581864    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:06:21.594416    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:06:21.594433    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:06:21.609160    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:06:21.609172    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:06:21.624970    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:06:21.624986    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:06:21.637148    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:06:21.637156    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
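	(The cycle above repeats for every failed probe: enumerate the per-component containers, then tail each one's logs. To reproduce the same diagnostics by hand inside the guest — typically over `minikube ssh` — the commands below are the ones the Run: lines execute; <container-id> is a placeholder for an ID returned by the docker ps filter, and the k8s_kube-apiserver filter can be swapped for any of the other component names shown above.)

	    # list apiserver containers (substitute etcd, coredns, kube-scheduler, ...)
	    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	    # tail one container's logs
	    docker logs --tail 400 <container-id>
	    # host-side logs and node state
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    # overall container status, preferring crictl when present
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a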
	I0213 15:06:24.156522    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:06:29.159251    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:06:29.159485    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:06:29.189497    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:06:29.189609    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:06:29.207053    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:06:29.207140    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:06:29.220992    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:06:29.221058    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:06:29.232924    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:06:29.232988    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:06:29.243252    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:06:29.243319    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:06:29.253727    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:06:29.253798    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:06:29.263788    3378 logs.go:276] 0 containers: []
	W0213 15:06:29.263801    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:06:29.263857    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:06:29.274498    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:06:29.274512    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:06:29.274518    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:06:29.289288    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:06:29.289297    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:06:29.306644    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:06:29.306653    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:06:29.317951    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:06:29.317965    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:06:29.356570    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:06:29.356578    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:06:29.360760    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:06:29.360770    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:06:29.385164    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:06:29.385174    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:06:29.420751    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:06:29.420764    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:06:29.432419    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:06:29.432431    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:06:29.444274    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:06:29.444288    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:06:29.458787    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:06:29.458801    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:06:29.472597    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:06:29.472609    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:06:29.483718    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:06:29.483728    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:06:29.503000    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:06:29.503013    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:06:29.521522    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:06:29.521530    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:06:29.533545    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:06:29.533555    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:06:29.545481    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:06:29.545494    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:06:32.074129    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:06:37.076472    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:06:37.076867    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:06:37.112567    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:06:37.112699    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:06:37.134757    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:06:37.134858    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:06:37.149969    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:06:37.150035    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:06:37.162784    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:06:37.162857    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:06:37.178189    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:06:37.178245    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:06:37.189062    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:06:37.189131    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:06:37.199791    3378 logs.go:276] 0 containers: []
	W0213 15:06:37.199801    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:06:37.199858    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:06:37.215456    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:06:37.215470    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:06:37.215476    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:06:37.219585    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:06:37.219596    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:06:37.230743    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:06:37.230754    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:06:37.242057    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:06:37.242067    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:06:37.279678    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:06:37.279695    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:06:37.293743    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:06:37.293753    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:06:37.309724    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:06:37.309734    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:06:37.334336    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:06:37.334343    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:06:37.348986    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:06:37.349000    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:06:37.366103    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:06:37.366114    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:06:37.377907    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:06:37.377917    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:06:37.389656    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:06:37.389667    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:06:37.426083    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:06:37.426096    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:06:37.440557    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:06:37.440569    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:06:37.463763    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:06:37.463774    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:06:37.478521    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:06:37.478532    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:06:37.493143    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:06:37.493152    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:06:40.006591    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:06:45.008979    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:06:45.009414    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:06:45.047044    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:06:45.047171    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:06:45.070822    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:06:45.070924    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:06:45.085498    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:06:45.085568    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:06:45.097414    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:06:45.097479    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:06:45.108532    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:06:45.108599    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:06:45.122884    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:06:45.122949    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:06:45.133039    3378 logs.go:276] 0 containers: []
	W0213 15:06:45.133049    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:06:45.133106    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:06:45.143516    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:06:45.143528    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:06:45.143533    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:06:45.157799    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:06:45.157810    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:06:45.172640    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:06:45.172648    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:06:45.184667    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:06:45.184680    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:06:45.221566    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:06:45.221573    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:06:45.245939    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:06:45.245946    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:06:45.257546    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:06:45.257557    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:06:45.268779    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:06:45.268790    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:06:45.273059    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:06:45.273066    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:06:45.308982    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:06:45.308995    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:06:45.332769    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:06:45.332779    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:06:45.348256    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:06:45.348265    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:06:45.364037    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:06:45.364048    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:06:45.376816    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:06:45.376829    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:06:45.390473    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:06:45.390482    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:06:45.401907    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:06:45.401916    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:06:45.413204    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:06:45.413217    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:06:47.932684    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:06:52.935036    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:06:52.935246    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:06:52.954855    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:06:52.954946    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:06:52.969942    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:06:52.970016    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:06:52.982096    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:06:52.982171    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:06:52.992972    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:06:52.993041    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:06:53.003338    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:06:53.003398    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:06:53.013255    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:06:53.013322    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:06:53.023188    3378 logs.go:276] 0 containers: []
	W0213 15:06:53.023196    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:06:53.023243    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:06:53.040253    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:06:53.040266    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:06:53.040283    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:06:53.055210    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:06:53.055220    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:06:53.067483    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:06:53.067493    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:06:53.091398    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:06:53.091405    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:06:53.126076    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:06:53.126087    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:06:53.140704    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:06:53.140722    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:06:53.169233    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:06:53.169244    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:06:53.184047    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:06:53.184057    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:06:53.198351    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:06:53.198363    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:06:53.214475    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:06:53.214484    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:06:53.225848    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:06:53.225862    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:06:53.263945    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:06:53.263953    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:06:53.275065    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:06:53.275076    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:06:53.286905    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:06:53.286917    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:06:53.298584    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:06:53.298596    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:06:53.303444    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:06:53.303454    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:06:53.327024    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:06:53.327039    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:06:55.840287    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:07:00.842495    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:07:00.842681    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:07:00.886228    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:07:00.886307    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:07:00.898534    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:07:00.898597    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:07:00.909431    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:07:00.909500    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:07:00.920601    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:07:00.920679    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:07:00.931456    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:07:00.931516    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:07:00.941749    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:07:00.941817    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:07:00.952060    3378 logs.go:276] 0 containers: []
	W0213 15:07:00.952068    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:07:00.952123    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:07:00.962509    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:07:00.962523    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:07:00.962528    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:07:00.973880    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:07:00.973892    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:07:00.988354    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:07:00.988366    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:07:01.003900    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:07:01.003912    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:07:01.022809    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:07:01.022821    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:07:01.037473    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:07:01.037486    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:07:01.077087    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:07:01.077097    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:07:01.091062    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:07:01.091072    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:07:01.116229    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:07:01.116241    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:07:01.130778    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:07:01.130788    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:07:01.143013    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:07:01.143023    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:07:01.167487    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:07:01.167495    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:07:01.171839    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:07:01.171845    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:07:01.187737    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:07:01.187749    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:07:01.204797    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:07:01.204806    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:07:01.215788    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:07:01.215796    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:07:01.229790    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:07:01.229798    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:07:03.765756    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:07:08.767836    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:07:08.767958    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:07:08.780173    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:07:08.780253    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:07:08.792553    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:07:08.792629    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:07:08.805295    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:07:08.805364    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:07:08.816469    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:07:08.816538    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:07:08.828541    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:07:08.828611    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:07:08.844526    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:07:08.844594    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:07:08.854582    3378 logs.go:276] 0 containers: []
	W0213 15:07:08.854594    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:07:08.854647    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:07:08.870160    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:07:08.870175    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:07:08.870180    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:07:08.881675    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:07:08.881687    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:07:08.893497    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:07:08.893506    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:07:08.897675    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:07:08.897682    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:07:08.935976    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:07:08.935987    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:07:08.950794    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:07:08.950804    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:07:08.962020    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:07:08.962031    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:07:08.977173    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:07:08.977191    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:07:08.994054    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:07:08.994064    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:07:09.017190    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:07:09.017200    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:07:09.030817    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:07:09.030832    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:07:09.046616    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:07:09.046626    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:07:09.059611    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:07:09.059621    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:07:09.072259    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:07:09.072268    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:07:09.083420    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:07:09.083430    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:07:09.120223    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:07:09.120229    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:07:09.134259    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:07:09.134268    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:07:11.661500    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:07:16.663787    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:07:16.663963    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:07:16.675478    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:07:16.675557    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:07:16.687626    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:07:16.687694    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:07:16.704359    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:07:16.704425    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:07:16.717841    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:07:16.717930    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:07:16.729635    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:07:16.729705    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:07:16.741326    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:07:16.741395    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:07:16.760717    3378 logs.go:276] 0 containers: []
	W0213 15:07:16.760727    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:07:16.760781    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:07:16.772644    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:07:16.772659    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:07:16.772666    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:07:16.777549    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:07:16.777559    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:07:16.792376    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:07:16.792388    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:07:16.809172    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:07:16.809186    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:07:16.821517    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:07:16.821530    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:07:16.833754    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:07:16.833767    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:07:16.846075    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:07:16.846087    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:07:16.866660    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:07:16.866675    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:07:16.878683    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:07:16.878694    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:07:16.919095    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:07:16.919111    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:07:16.960315    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:07:16.960328    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:07:16.984147    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:07:16.984158    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:07:16.998587    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:07:16.998597    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:07:17.009976    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:07:17.009989    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:07:17.024220    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:07:17.024232    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:07:17.041300    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:07:17.041311    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:07:17.055830    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:07:17.055840    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:07:19.583907    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:07:24.586123    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:07:24.586308    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:07:24.597949    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:07:24.598025    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:07:24.608650    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:07:24.608729    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:07:24.619520    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:07:24.619590    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:07:24.630524    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:07:24.630604    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:07:24.642086    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:07:24.642151    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:07:24.652802    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:07:24.652878    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:07:24.663513    3378 logs.go:276] 0 containers: []
	W0213 15:07:24.663524    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:07:24.663581    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:07:24.674311    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:07:24.674326    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:07:24.674332    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:07:24.712474    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:07:24.712489    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:07:24.725396    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:07:24.725407    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:07:24.765131    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:07:24.765144    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:07:24.769581    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:07:24.769588    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:07:24.783590    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:07:24.783603    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:07:24.809492    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:07:24.809501    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:07:24.828463    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:07:24.828473    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:07:24.839967    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:07:24.839979    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:07:24.851828    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:07:24.851839    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:07:24.869301    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:07:24.869312    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:07:24.880911    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:07:24.880922    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:07:24.893662    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:07:24.893675    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:07:24.919306    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:07:24.919317    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:07:24.934198    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:07:24.934209    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:07:24.953161    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:07:24.953172    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:07:24.968567    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:07:24.968579    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:07:27.481066    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:07:32.483535    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:07:32.483904    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:07:32.519267    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:07:32.519416    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:07:32.540104    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:07:32.540211    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:07:32.555396    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:07:32.555494    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:07:32.568736    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:07:32.568810    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:07:32.579943    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:07:32.580019    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:07:32.591572    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:07:32.591642    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:07:32.603293    3378 logs.go:276] 0 containers: []
	W0213 15:07:32.603305    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:07:32.603379    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:07:32.619593    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:07:32.619607    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:07:32.619612    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:07:32.645392    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:07:32.645410    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:07:32.659437    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:07:32.659452    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:07:32.688817    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:07:32.688828    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:07:32.704219    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:07:32.704231    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:07:32.720930    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:07:32.720940    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:07:32.743725    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:07:32.743736    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:07:32.762606    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:07:32.762622    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:07:32.775673    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:07:32.775688    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:07:32.780676    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:07:32.780687    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:07:32.795217    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:07:32.795232    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:07:32.807939    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:07:32.807952    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:07:32.820866    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:07:32.820879    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:07:32.832614    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:07:32.832630    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:07:32.854182    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:07:32.854198    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:07:32.867059    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:07:32.867074    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:07:32.909823    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:07:32.909844    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:07:35.450782    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:07:40.453115    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:07:40.453299    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:07:40.476314    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:07:40.476385    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:07:40.488045    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:07:40.488116    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:07:40.498729    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:07:40.498796    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:07:40.509301    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:07:40.509367    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:07:40.519695    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:07:40.519758    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:07:40.530317    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:07:40.530387    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:07:40.540250    3378 logs.go:276] 0 containers: []
	W0213 15:07:40.540262    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:07:40.540320    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:07:40.551134    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:07:40.551147    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:07:40.551153    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:07:40.588616    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:07:40.588624    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:07:40.602517    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:07:40.602528    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:07:40.615039    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:07:40.615051    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:07:40.653680    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:07:40.653693    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:07:40.672100    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:07:40.672115    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:07:40.694958    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:07:40.694972    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:07:40.713587    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:07:40.713601    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:07:40.726857    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:07:40.726871    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:07:40.741720    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:07:40.741731    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:07:40.769274    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:07:40.769294    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:07:40.774507    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:07:40.774519    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:07:40.790856    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:07:40.790869    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:07:40.816902    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:07:40.816918    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:07:40.833493    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:07:40.833508    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:07:40.846872    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:07:40.846886    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:07:40.859217    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:07:40.859231    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:07:43.373656    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:07:48.376192    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:07:48.376523    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:07:48.407183    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:07:48.407310    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:07:48.425756    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:07:48.425850    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:07:48.439166    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:07:48.439244    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:07:48.451122    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:07:48.451197    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:07:48.461822    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:07:48.461883    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:07:48.472340    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:07:48.472411    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:07:48.482476    3378 logs.go:276] 0 containers: []
	W0213 15:07:48.482486    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:07:48.482541    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:07:48.493052    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:07:48.493066    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:07:48.493072    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:07:48.510592    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:07:48.510602    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:07:48.533988    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:07:48.533995    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:07:48.538410    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:07:48.538419    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:07:48.557182    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:07:48.557191    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:07:48.571809    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:07:48.571821    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:07:48.585998    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:07:48.586009    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:07:48.603990    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:07:48.603999    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:07:48.621211    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:07:48.621220    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:07:48.660139    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:07:48.660146    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:07:48.674242    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:07:48.674253    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:07:48.685492    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:07:48.685505    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:07:48.724159    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:07:48.724173    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:07:48.741747    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:07:48.741757    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:07:48.756068    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:07:48.756083    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:07:48.767609    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:07:48.767620    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:07:48.790855    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:07:48.790863    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:07:51.304241    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:07:56.306750    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:07:56.307431    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:07:56.346186    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:07:56.346353    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:07:56.376903    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:07:56.376987    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:07:56.390726    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:07:56.390803    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:07:56.402233    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:07:56.402302    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:07:56.413022    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:07:56.413092    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:07:56.424163    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:07:56.424244    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:07:56.435005    3378 logs.go:276] 0 containers: []
	W0213 15:07:56.435016    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:07:56.435077    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:07:56.445660    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:07:56.445674    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:07:56.445680    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:07:56.459682    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:07:56.459691    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:07:56.470858    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:07:56.470869    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:07:56.495687    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:07:56.495696    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:07:56.533029    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:07:56.533038    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:07:56.544598    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:07:56.544611    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:07:56.556318    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:07:56.556331    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:07:56.580299    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:07:56.580309    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:07:56.597254    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:07:56.597267    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:07:56.609012    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:07:56.609022    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:07:56.627227    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:07:56.627240    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:07:56.666274    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:07:56.666286    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:07:56.679926    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:07:56.679935    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:07:56.694029    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:07:56.694040    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:07:56.709341    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:07:56.709352    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:07:56.723941    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:07:56.723950    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:07:56.735062    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:07:56.735074    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:07:59.239914    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:04.241931    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:04.242044    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:04.254337    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:04.254410    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:04.266933    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:04.267010    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:04.279566    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:04.279652    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:04.291745    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:04.291829    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:04.303567    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:04.303635    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:04.320388    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:04.320470    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:04.332625    3378 logs.go:276] 0 containers: []
	W0213 15:08:04.332635    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:04.332692    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:04.348501    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:04.348517    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:04.348524    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:04.364905    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:04.364921    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:04.382178    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:04.382189    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:04.395562    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:04.395575    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:04.437546    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:04.437558    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:04.455537    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:04.455556    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:04.471858    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:04.471870    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:04.486031    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:04.486042    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:04.530154    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:04.530173    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:04.556260    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:04.556271    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:04.574553    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:04.574564    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:04.597844    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:04.597859    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:04.602191    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:04.602199    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:04.616470    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:04.616484    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:04.628594    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:04.628606    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:04.645916    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:04.645927    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:04.661731    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:04.661744    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:07.184317    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:12.186698    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:12.187109    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:12.226729    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:12.226864    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:12.248372    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:12.248486    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:12.271787    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:12.271864    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:12.283311    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:12.283383    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:12.293988    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:12.294062    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:12.304845    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:12.304912    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:12.315198    3378 logs.go:276] 0 containers: []
	W0213 15:08:12.315209    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:12.315268    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:12.335172    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:12.335188    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:12.335194    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:12.346262    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:12.346272    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:12.350441    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:12.350449    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:12.365464    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:12.365475    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:12.382920    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:12.382929    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:12.393753    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:12.393764    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:12.430376    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:12.430389    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:12.444893    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:12.444903    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:12.456596    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:12.456608    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:12.471405    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:12.471417    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:12.482999    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:12.483010    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:12.494722    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:12.494732    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:12.532712    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:12.532721    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:12.546519    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:12.546529    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:12.562871    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:12.562882    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:12.585692    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:12.585700    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:12.612681    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:12.612691    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:15.130489    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:20.132621    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:20.133231    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:20.174786    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:20.174912    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:20.192981    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:20.193068    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:20.206255    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:20.206324    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:20.217799    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:20.217867    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:20.228506    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:20.228570    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:20.238914    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:20.238976    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:20.248952    3378 logs.go:276] 0 containers: []
	W0213 15:08:20.248962    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:20.249023    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:20.259086    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:20.259103    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:20.259110    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:20.295589    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:20.295600    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:20.310360    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:20.310375    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:20.335680    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:20.335697    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:20.353727    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:20.353744    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:20.370705    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:20.370715    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:20.411844    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:20.411859    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:20.423404    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:20.423415    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:20.427894    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:20.427902    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:20.439615    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:20.439626    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:20.450891    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:20.450902    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:20.474372    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:20.474381    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:20.485968    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:20.485978    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:20.500700    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:20.500711    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:20.521566    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:20.521577    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:20.536947    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:20.536957    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:20.554755    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:20.554764    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:23.068344    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:28.070760    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:28.071182    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:28.114382    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:28.114518    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:28.136835    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:28.136954    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:28.152234    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:28.152313    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:28.164691    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:28.164767    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:28.176596    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:28.176658    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:28.190491    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:28.190567    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:28.201225    3378 logs.go:276] 0 containers: []
	W0213 15:08:28.201238    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:28.201294    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:28.212537    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:28.212550    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:28.212556    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:28.226585    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:28.226596    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:28.238311    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:28.238321    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:28.262754    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:28.262765    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:28.274379    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:28.274389    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:28.289285    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:28.289295    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:28.300519    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:28.300530    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:28.317940    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:28.317951    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:28.330801    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:28.330813    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:28.342488    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:28.342499    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:28.347505    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:28.347510    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:28.387069    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:28.387080    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:28.401452    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:28.401465    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:28.418316    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:28.418331    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:28.460802    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:28.460817    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:28.486124    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:28.486145    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:28.501222    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:28.501234    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:31.015327    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:36.017514    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:36.017803    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:36.048945    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:36.049063    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:36.075104    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:36.075193    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:36.087432    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:36.087505    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:36.104262    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:36.104334    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:36.115214    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:36.115286    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:36.126026    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:36.126098    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:36.136405    3378 logs.go:276] 0 containers: []
	W0213 15:08:36.136422    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:36.136480    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:36.146921    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:36.146937    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:36.146943    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:36.151491    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:36.151500    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:36.165802    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:36.165811    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:36.181307    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:36.181315    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:36.193127    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:36.193138    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:36.205450    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:36.205460    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:36.219754    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:36.219763    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:36.231438    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:36.231450    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:36.255507    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:36.255517    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:36.266590    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:36.266603    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:36.282782    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:36.282792    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:36.297517    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:36.297528    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:36.321407    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:36.321415    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:36.358972    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:36.358982    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:36.394858    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:36.394872    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:36.407236    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:36.407248    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:36.428561    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:36.428572    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:38.942454    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:43.944600    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:43.944699    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:43.955826    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:43.955891    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:43.966713    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:43.966779    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:43.977141    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:43.977208    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:43.987849    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:43.987924    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:43.998387    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:43.998450    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:44.009674    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:44.009745    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:44.025067    3378 logs.go:276] 0 containers: []
	W0213 15:08:44.025076    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:44.025134    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:44.040806    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:44.040823    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:44.040829    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:44.079790    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:44.079801    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:44.093730    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:44.093739    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:44.108382    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:44.108392    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:44.119339    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:44.119351    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:44.130665    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:44.130676    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:44.154774    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:44.154782    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:44.172488    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:44.172520    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:44.184489    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:44.184502    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:44.196369    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:44.196379    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:44.213433    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:44.213445    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:44.252110    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:44.252122    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:44.279503    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:44.279515    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:44.297593    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:44.297604    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:44.310509    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:44.310520    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:44.314984    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:44.314991    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:44.336843    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:44.336854    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:46.864615    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:51.866517    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:51.866680    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:51.878223    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:51.878292    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:51.892632    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:51.892710    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:51.903376    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:51.903449    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:51.914603    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:51.914672    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:51.925616    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:51.925691    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:51.936497    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:51.936569    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:51.947012    3378 logs.go:276] 0 containers: []
	W0213 15:08:51.947028    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:51.947080    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:51.957096    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:51.957114    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:51.957120    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:51.968406    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:51.968417    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:51.984593    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:51.984607    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:51.997065    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:51.997080    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:52.001748    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:52.001757    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:52.018278    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:52.018289    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:52.029819    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:52.029829    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:52.060741    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:52.060755    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:52.076369    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:52.076380    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:52.093336    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:52.093347    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:52.104965    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:52.104976    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:52.116008    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:52.116019    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:52.138219    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:52.138228    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:52.176532    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:52.176540    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:52.211575    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:52.211586    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:52.223449    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:52.223462    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:52.237379    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:52.237390    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:54.752398    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:59.753262    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:59.753380    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:59.764658    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:59.764727    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:59.775636    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:59.775708    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:59.786346    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:59.786417    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:59.796776    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:59.796842    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:59.806930    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:59.806999    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:59.817141    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:59.817207    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:59.827423    3378 logs.go:276] 0 containers: []
	W0213 15:08:59.827435    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:59.827488    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:59.838695    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:59.838709    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:59.838715    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:59.863295    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:59.863306    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:59.877679    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:59.877695    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:59.892462    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:59.892477    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:59.910986    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:59.911008    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:59.931133    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:59.931145    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:59.943229    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:59.943239    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:59.947554    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:59.947561    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:59.959071    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:59.959082    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:59.973990    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:59.973998    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:59.986013    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:59.986027    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:00.024338    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:00.024354    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:00.059238    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:09:00.059253    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:09:00.071198    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:00.071211    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:00.094576    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:09:00.094585    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:09:00.109133    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:09:00.109143    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:09:00.120182    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:09:00.120191    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:09:02.633446    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:07.635559    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:07.635716    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:09:07.648020    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:09:07.648115    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:09:07.659356    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:09:07.659425    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:09:07.669944    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:09:07.670007    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:09:07.684476    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:09:07.684543    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:09:07.698202    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:09:07.698266    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:09:07.710336    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:09:07.710400    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:09:07.720383    3378 logs.go:276] 0 containers: []
	W0213 15:09:07.720393    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:09:07.720445    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:09:07.731124    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:09:07.731139    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:09:07.731146    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:09:07.742916    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:09:07.742927    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:07.782720    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:09:07.782734    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:09:07.787284    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:09:07.787290    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:09:07.812638    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:09:07.812649    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:09:07.827895    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:09:07.827904    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:09:07.845438    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:09:07.845451    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:09:07.863670    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:09:07.863681    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:09:07.877523    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:09:07.877540    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:09:07.889364    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:09:07.889376    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:09:07.904198    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:09:07.904209    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:09:07.915859    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:07.915870    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:07.939587    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:07.939595    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:07.974439    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:09:07.974452    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:09:07.989271    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:09:07.989281    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:09:08.006213    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:09:08.006227    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:09:08.018250    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:09:08.018261    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
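Each log-gathering pass above locates component containers the same way: cri-dockerd names containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so a "docker ps -a" name filter on the k8s_ prefix matches every instance of a component, including exited ones. A minimal Go sketch of that lookup, run locally rather than through the SSH runner used in this log (the helper name is hypothetical):

```go
// containerIDs lists all containers (running or exited) whose name carries
// the k8s_<component> prefix assigned by cri-dockerd, mirroring the
// "docker ps -a --filter=name=k8s_... --format={{.ID}}" calls in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids) // same shape as the logs.go:276 lines
}
```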
	I0213 15:09:10.532343    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:15.534784    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:15.534935    3378 kubeadm.go:640] restartCluster took 4m4.516227292s
	W0213 15:09:15.535091    3378 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
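The repeating pattern above, "Checking apiserver healthz" followed roughly five seconds later by "stopped: ... Client.Timeout exceeded", is a short-timeout HTTP probe retried until restartCluster's overall budget (about four minutes here) runs out, after which minikube falls back to a full kubeadm reset. A minimal sketch of that poll loop, assuming a 5s client timeout and a 2s backoff; minikube's real implementation differs (in particular it verifies the cluster CA rather than skipping TLS verification):

```go
// Probe /healthz with a short per-request timeout, retrying until an
// overall deadline expires. Intervals and the InsecureSkipVerify shortcut
// are assumptions for illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped:" line
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				return nil // apiserver reported healthy
			}
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println("!", err)
	}
}
```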
	I0213 15:09:15.535147    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0213 15:09:16.584137    3378 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.048996583s)
	I0213 15:09:16.584207    3378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:09:16.589044    3378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 15:09:16.591755    3378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:09:16.594724    3378 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 15:09:16.594740    3378 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 15:09:16.612161    3378 kubeadm.go:322] [init] Using Kubernetes version: v1.24.1
	I0213 15:09:16.612218    3378 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 15:09:16.659786    3378 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 15:09:16.659844    3378 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 15:09:16.659916    3378 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 15:09:16.708598    3378 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 15:09:16.712793    3378 out.go:204]   - Generating certificates and keys ...
	I0213 15:09:16.712825    3378 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 15:09:16.712862    3378 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 15:09:16.712900    3378 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 15:09:16.712944    3378 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 15:09:16.712980    3378 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 15:09:16.713016    3378 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 15:09:16.713051    3378 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 15:09:16.713083    3378 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 15:09:16.713126    3378 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 15:09:16.713162    3378 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 15:09:16.713187    3378 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 15:09:16.713225    3378 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 15:09:16.801873    3378 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 15:09:16.917161    3378 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 15:09:17.014021    3378 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 15:09:17.111883    3378 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 15:09:17.142112    3378 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 15:09:17.142484    3378 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 15:09:17.142514    3378 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 15:09:17.228035    3378 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 15:09:17.231323    3378 out.go:204]   - Booting up control plane ...
	I0213 15:09:17.231372    3378 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 15:09:17.231409    3378 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 15:09:17.231591    3378 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 15:09:17.231864    3378 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 15:09:17.232725    3378 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 15:09:21.236751    3378 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003751 seconds
	I0213 15:09:21.236821    3378 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 15:09:21.240085    3378 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 15:09:21.758214    3378 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 15:09:21.758471    3378 kubeadm.go:322] [mark-control-plane] Marking the node running-upgrade-781000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 15:09:22.262732    3378 kubeadm.go:322] [bootstrap-token] Using token: cxpo2j.ezru91fdgin60m1s
	I0213 15:09:22.265412    3378 out.go:204]   - Configuring RBAC rules ...
	I0213 15:09:22.265478    3378 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 15:09:22.265574    3378 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 15:09:22.272752    3378 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 15:09:22.273583    3378 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 15:09:22.274674    3378 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 15:09:22.275417    3378 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 15:09:22.278687    3378 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 15:09:22.449872    3378 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 15:09:22.666903    3378 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 15:09:22.667380    3378 kubeadm.go:322] 
	I0213 15:09:22.667406    3378 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 15:09:22.667409    3378 kubeadm.go:322] 
	I0213 15:09:22.667439    3378 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 15:09:22.667441    3378 kubeadm.go:322] 
	I0213 15:09:22.667479    3378 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 15:09:22.667536    3378 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 15:09:22.667560    3378 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 15:09:22.667583    3378 kubeadm.go:322] 
	I0213 15:09:22.667608    3378 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 15:09:22.667610    3378 kubeadm.go:322] 
	I0213 15:09:22.667634    3378 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 15:09:22.667637    3378 kubeadm.go:322] 
	I0213 15:09:22.667660    3378 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 15:09:22.667703    3378 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 15:09:22.667751    3378 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 15:09:22.667754    3378 kubeadm.go:322] 
	I0213 15:09:22.667802    3378 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 15:09:22.667837    3378 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 15:09:22.667840    3378 kubeadm.go:322] 
	I0213 15:09:22.667878    3378 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cxpo2j.ezru91fdgin60m1s \
	I0213 15:09:22.667929    3378 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d \
	I0213 15:09:22.667941    3378 kubeadm.go:322] 	--control-plane 
	I0213 15:09:22.667948    3378 kubeadm.go:322] 
	I0213 15:09:22.667985    3378 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 15:09:22.667988    3378 kubeadm.go:322] 
	I0213 15:09:22.668027    3378 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cxpo2j.ezru91fdgin60m1s \
	I0213 15:09:22.668072    3378 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d 
	I0213 15:09:22.668181    3378 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
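The sha256 value in the join commands above is kubeadm's CA public-key pin: a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch of that computation; the ca.crt path is inferred from the certificateDir logged at 15:09:16.708598, and the error handling is illustrative:

```go
// Reproduces kubeadm's --discovery-token-ca-cert-hash pin: SHA-256 over
// the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
// The certificate path below is an inference from the logged certificateDir.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("--discovery-token-ca-cert-hash sha256:%x\n", sum)
}
```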
	I0213 15:09:22.668190    3378 cni.go:84] Creating CNI manager for ""
	I0213 15:09:22.668207    3378 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:09:22.671499    3378 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 15:09:22.679447    3378 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 15:09:22.682910    3378 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
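The 457-byte 1-k8s.conflist written above is a two-plugin bridge chain (bridge plus portmap). A representative example, embedded as a Go string since the actual file contents were not captured in this run; the subnet and option values are assumptions:

```go
// Representative content for /etc/cni/net.d/1-k8s.conflist: a bridge
// plugin with host-local IPAM, chained with portmap. Values below are
// illustrative, not captured from this run.
package main

import "fmt"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() { fmt.Println(bridgeConflist) }
```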
	I0213 15:09:22.687874    3378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 15:09:22.687922    3378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 15:09:22.687923    3378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fb52fe04bc8b044b129ef2ff27607d20a9fceb93 minikube.k8s.io/name=running-upgrade-781000 minikube.k8s.io/updated_at=2024_02_13T15_09_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 15:09:22.730911    3378 ops.go:34] apiserver oom_adj: -16
	I0213 15:09:22.730982    3378 kubeadm.go:1088] duration metric: took 43.105584ms to wait for elevateKubeSystemPrivileges.
	I0213 15:09:22.744310    3378 host.go:66] Checking if "running-upgrade-781000" exists ...
	I0213 15:09:22.745368    3378 main.go:141] libmachine: Using SSH client type: external
	I0213 15:09:22.745386    3378 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa (-rw-------)
	I0213 15:09:22.745401    3378 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa -p 50111] /usr/bin/ssh <nil>}
	I0213 15:09:22.745413    3378 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa -p 50111 -f -NTL 50143:localhost:8443
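The ssh invocation above ("-f -NTL 50143:localhost:8443") opens a background local port-forward so the host can reach the guest apiserver: connections to localhost:50143 on the Jenkins host are tunneled through the VM's SSH port (50111) to 8443 inside the guest. An equivalent sketch with os/exec, reusing the key path and ports from the log and trimming the remaining options for brevity:

```go
// Background local port-forward equivalent to the ssh command in the log:
// host localhost:50143 -> (SSH on guest port 50111) -> guest localhost:8443.
// Several hardening options from the original invocation are omitted here.
package main

import "os/exec"

func main() {
	cmd := exec.Command("/usr/bin/ssh",
		"-F", "/dev/null",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa",
		"-p", "50111",
		"docker@localhost",
		"-f",                           // fork to background after auth
		"-NTL", "50143:localhost:8443", // no remote command, no TTY, local forward
	)
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```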
	I0213 15:09:22.787808    3378 kubeadm.go:406] StartCluster complete in 4m11.826482833s
	I0213 15:09:22.787870    3378 settings.go:142] acquiring lock: {Name:mkdd6397441cfaf6d06a74b65d6ddefdb863237c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:09:22.787965    3378 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:09:22.788509    3378 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/kubeconfig: {Name:mkf66d96abab1e512e6f2721c341e70e5b11c9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:09:22.788883    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 15:09:22.788987    3378 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 15:09:22.789033    3378 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-781000"
	I0213 15:09:22.789048    3378 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-781000"
	W0213 15:09:22.789051    3378 addons.go:243] addon storage-provisioner should already be in state true
	I0213 15:09:22.789074    3378 host.go:66] Checking if "running-upgrade-781000" exists ...
	I0213 15:09:22.789072    3378 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-781000"
	I0213 15:09:22.789083    3378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-781000"
	I0213 15:09:22.789167    3378 config.go:182] Loaded profile config "running-upgrade-781000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:09:22.789311    3378 kapi.go:59] client config for running-upgrade-781000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104157f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 15:09:22.790232    3378 kapi.go:59] client config for running-upgrade-781000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104157f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
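The two kapi.go dumps above show the client-go rest.Config minikube builds for this profile: certificate-based authentication using the profile's client cert/key and the cluster CA, pointed at https://10.0.2.15:8443. A minimal sketch constructing an equivalent client (file paths copied from the log; everything else left at client-go defaults):

```go
// Minimal client-go construction equivalent to the rest.Config dumped by
// kapi.go above. Cert/key/CA paths are taken verbatim from the log.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = clientset // e.g. clientset.CoreV1().Pods("kube-system").List(...)
}
```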
	I0213 15:09:22.790374    3378 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-781000"
	W0213 15:09:22.790379    3378 addons.go:243] addon default-storageclass should already be in state true
	I0213 15:09:22.790386    3378 host.go:66] Checking if "running-upgrade-781000" exists ...
	I0213 15:09:22.794329    3378 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:09:22.798407    3378 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 15:09:22.798414    3378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 15:09:22.798423    3378 sshutil.go:53] new ssh client: &{IP:localhost Port:50111 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa Username:docker}
	I0213 15:09:22.799223    3378 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 15:09:22.799228    3378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 15:09:22.799234    3378 sshutil.go:53] new ssh client: &{IP:localhost Port:50111 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa Username:docker}
	I0213 15:09:22.825713    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           10.0.2.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 15:09:22.838588    3378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 15:09:22.903059    3378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 15:09:23.210970    3378 start.go:929] {"host.minikube.internal": 10.0.2.2} host record injected into CoreDNS's ConfigMap
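The sed pipeline at 15:09:22.825713 rewrites the CoreDNS ConfigMap in place: it inserts a "log" directive before "errors" and a hosts block before the forward directive, so host.minikube.internal resolves to 10.0.2.2 (the QEMU user-mode network gateway). A representative fragment of the resulting Corefile, embedded as a Go string; directives other than the two insertions are assumed kubeadm defaults, not captured from this run:

```go
// Representative Corefile after the sed edits above: "log" inserted before
// "errors", and a hosts{} block inserted before the forward directive.
// Surrounding directives are assumed kubeadm defaults.
package main

import "fmt"

const corefile = `.:53 {
    log
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
    }
    hosts {
       10.0.2.2 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    cache 30
    loop
}`

func main() { fmt.Println(corefile) }
```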
	W0213 15:09:52.789561    3378 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "running-upgrade-781000" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	E0213 15:09:52.789578    3378 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	I0213 15:09:52.789592    3378 start.go:223] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:09:52.793206    3378 out.go:177] * Verifying Kubernetes components...
	I0213 15:09:52.796868    3378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:09:52.802510    3378 api_server.go:52] waiting for apiserver process to appear ...
	I0213 15:09:52.802568    3378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:09:52.806777    3378 api_server.go:72] duration metric: took 17.170958ms to wait for apiserver process to appear ...
	I0213 15:09:52.806783    3378 api_server.go:88] waiting for apiserver healthz status ...
	I0213 15:09:52.806790    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0213 15:09:53.224643    3378 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0213 15:09:53.229314    3378 out.go:177] * Enabled addons: storage-provisioner
	I0213 15:09:53.237211    3378 addons.go:505] enable addons completed in 30.448888167s: enabled=[storage-provisioner]
	I0213 15:09:57.808735    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:57.808756    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:02.808859    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:02.808883    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:07.809050    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:07.809086    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:12.809407    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:12.809433    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:17.809855    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:17.809901    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:22.810731    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:22.810747    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:27.811516    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:27.811569    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:32.812726    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:32.812751    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:37.814073    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:37.814095    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:42.815909    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:42.815930    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:47.817986    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:47.818006    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:52.818894    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:52.819013    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:52.833129    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:10:52.833215    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:52.844557    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:10:52.844640    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:52.855198    3378 logs.go:276] 2 containers: [6e5977a9cc40 d447b53b1dd0]
	I0213 15:10:52.855262    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:52.866218    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:10:52.866279    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:52.876849    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:10:52.876917    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:52.887512    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:10:52.887576    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:52.897822    3378 logs.go:276] 0 containers: []
	W0213 15:10:52.897832    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:52.897884    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:52.907925    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:10:52.907944    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:10:52.907949    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:10:52.925536    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:10:52.925546    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:10:52.942477    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:52.942490    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:52.966424    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:52.966433    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:53.001118    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:53.001128    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:53.038370    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:10:53.038383    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:10:53.050160    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:10:53.050170    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:10:53.065372    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:10:53.065382    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:53.076468    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:10:53.076478    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:10:53.091248    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:10:53.091259    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:10:53.105884    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:53.105894    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:53.110433    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:10:53.110440    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:10:53.122567    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:10:53.122578    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:10:55.636182    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:00.638433    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:00.638656    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:00.655853    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:00.655936    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:00.675718    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:00.675785    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:00.686043    3378 logs.go:276] 2 containers: [6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:00.686114    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:00.724444    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:00.724504    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:00.735094    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:00.735160    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:00.746148    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:00.746210    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:00.757204    3378 logs.go:276] 0 containers: []
	W0213 15:11:00.757216    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:00.757272    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:00.767665    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:00.767680    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:00.767684    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:00.780732    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:00.780743    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:00.793168    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:00.793180    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:00.804458    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:00.804468    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:00.821097    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:00.821113    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:00.860060    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:00.860071    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:00.872327    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:00.872338    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:00.896411    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:00.896422    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:00.900731    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:00.900738    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:00.915519    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:00.915529    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:00.930921    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:00.930930    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:00.948724    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:00.948736    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:00.960672    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:00.960682    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:03.497266    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:08.499474    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:08.499704    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:08.523751    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:08.523871    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:08.542374    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:08.542459    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:08.555562    3378 logs.go:276] 2 containers: [6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:08.555639    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:08.566132    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:08.566198    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:08.576158    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:08.576224    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:08.586482    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:08.586549    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:08.596398    3378 logs.go:276] 0 containers: []
	W0213 15:11:08.596407    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:08.596478    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:08.607598    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:08.607617    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:08.607622    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:08.644238    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:08.644248    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:08.658331    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:08.658341    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:08.670090    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:08.670102    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:08.687454    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:08.687465    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:08.700157    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:08.700169    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:08.704474    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:08.704481    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:08.738625    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:08.738637    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:08.752272    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:08.752283    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:08.763441    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:08.763454    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:08.775896    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:08.775907    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:08.789893    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:08.789904    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:08.805137    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:08.805146    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:11.331444    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:16.333652    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:16.333973    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:16.364431    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:16.364570    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:16.382755    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:16.382849    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:16.396374    3378 logs.go:276] 2 containers: [6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:16.396450    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:16.408271    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:16.408344    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:16.418950    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:16.419028    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:16.429806    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:16.429874    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:16.439829    3378 logs.go:276] 0 containers: []
	W0213 15:11:16.439843    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:16.439901    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:16.450255    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:16.450271    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:16.450276    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:16.485565    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:16.485573    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:16.497435    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:16.497446    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:16.512891    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:16.512901    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:16.518000    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:16.518007    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:16.535167    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:16.535177    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:16.546993    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:16.547005    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:16.561202    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:16.561213    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:16.573347    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:16.573360    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:16.597402    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:16.597412    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:16.609417    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:16.609427    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:16.645196    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:16.645206    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:16.657250    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:16.657260    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:19.184336    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:24.186539    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:24.186775    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:24.201512    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:24.201599    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:24.213764    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:24.213830    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:24.227944    3378 logs.go:276] 2 containers: [6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:24.228005    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:24.238889    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:24.238966    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:24.249355    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:24.249426    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:24.263853    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:24.263929    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:24.274852    3378 logs.go:276] 0 containers: []
	W0213 15:11:24.274863    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:24.274920    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:24.287126    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:24.287145    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:24.287150    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:24.299087    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:24.299101    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:24.311064    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:24.311074    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:24.347103    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:24.347112    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:24.360781    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:24.360791    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:24.372580    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:24.372592    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:24.386389    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:24.386399    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:24.404557    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:24.404567    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:24.409405    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:24.409414    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:24.424395    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:24.424403    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:24.436248    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:24.436258    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:24.471006    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:24.471023    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:24.489083    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:24.489093    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:27.015802    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:32.018506    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:32.018881    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:32.050314    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:32.050445    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:32.068455    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:32.068556    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:32.082520    3378 logs.go:276] 2 containers: [6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:32.082601    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:32.094316    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:32.094381    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:32.104836    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:32.104904    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:32.115858    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:32.115935    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:32.129694    3378 logs.go:276] 0 containers: []
	W0213 15:11:32.129705    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:32.129764    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:32.140806    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:32.140832    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:32.140837    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:32.152993    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:32.153004    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:32.165025    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:32.165035    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:32.182572    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:32.182584    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:32.197239    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:32.197249    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:32.208314    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:32.208326    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:32.220645    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:32.220654    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:32.256737    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:32.256748    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:32.271820    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:32.271830    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:32.295227    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:32.295236    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:32.299398    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:32.299408    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:32.332618    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:32.332628    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:32.346382    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:32.346392    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:34.863168    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:39.865315    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:39.865531    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:39.885181    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:39.885283    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:39.899616    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:39.899687    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:39.912118    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:39.912192    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:39.923199    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:39.923272    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:39.933519    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:39.933582    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:39.944104    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:39.944176    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:39.955085    3378 logs.go:276] 0 containers: []
	W0213 15:11:39.955095    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:39.955150    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:39.967536    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:39.967559    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:39.967564    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:40.003692    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:11:40.003703    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:11:40.016042    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:11:40.016052    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:11:40.027312    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:40.027322    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:40.039937    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:40.039948    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:40.056072    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:40.056082    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:40.060893    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:40.060901    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:40.074839    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:40.074852    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:40.086848    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:40.086860    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:40.122804    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:40.122813    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:40.136935    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:40.136947    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:40.148990    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:40.148999    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:40.168001    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:40.168010    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:40.192750    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:40.192758    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:40.204720    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:40.204731    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
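
The sweep itself is visible verbatim above: one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per component to find container IDs, then "docker logs --tail 400 <id>" for each hit (plus journalctl, dmesg, and kubectl-describe-nodes passes). A hedged sketch of that enumeration follows, run locally for simplicity rather than through minikube's ssh_runner; the component names and the 400-line tail are taken from the log, the wiring is illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // The components enumerated in each cycle of the log above.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    }

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		// matches the "logs.go:276] N containers: [...]" lines above
    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
    		for _, id := range ids {
    			// mirrors: /bin/bash -c "docker logs --tail 400 <id>"
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("--- logs for %s [%s] ---\n%s", c, id, logs)
    		}
    	}
    }

When a filter matches nothing, as with kindnet on this Docker-runtime node, the empty result produces the "No container was found matching" warning seen in every cycle.
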
	I0213 15:11:42.720025    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:47.722153    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:47.722298    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:47.734561    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:47.734643    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:47.751283    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:47.751364    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:47.761768    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:47.761853    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:47.782066    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:47.782133    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:47.792884    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:47.792955    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:47.803767    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:47.803835    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:47.814476    3378 logs.go:276] 0 containers: []
	W0213 15:11:47.814489    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:47.814546    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:47.833013    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:47.833029    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:47.833036    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:47.837659    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:11:47.837665    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:11:47.849333    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:47.849342    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:47.868041    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:47.868053    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:47.880104    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:47.880115    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:47.917415    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:47.917426    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:47.931317    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:47.931328    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:47.944984    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:47.944996    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:47.956723    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:47.956733    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:47.968763    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:47.968773    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:48.003062    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:11:48.003073    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:11:48.015209    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:48.015222    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:48.027025    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:48.027037    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:48.043166    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:48.043178    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:48.067839    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:48.067847    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:50.584090    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:55.586560    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:55.587011    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:55.628268    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:55.628439    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:55.649896    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:55.650000    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:55.664936    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:55.665012    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:55.677211    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:55.677276    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:55.688422    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:55.688506    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:55.700834    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:55.700898    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:55.713887    3378 logs.go:276] 0 containers: []
	W0213 15:11:55.713901    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:55.713964    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:55.724173    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:55.724189    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:55.724194    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:55.740668    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:11:55.740679    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:11:55.752212    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:55.752222    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:55.765480    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:55.765491    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:55.777790    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:55.777800    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:55.795587    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:55.795597    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:55.829816    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:55.829830    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:55.844058    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:55.844077    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:55.868328    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:55.868338    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:55.880002    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:55.880012    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:55.915570    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:55.915582    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:55.924270    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:55.924281    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:55.941555    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:11:55.941567    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:11:55.953792    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:55.953804    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:55.969477    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:55.969488    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
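
The "container status" step in each sweep uses a shell fallback rather than assuming a runtime: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a prefers crictl when it is on PATH, and when the bare name "crictl" fails to execute, the || branch falls through to docker ps -a. A small sketch of the same chain (sudo and the SSH transport are omitted; $(...) stands in for the backticks in the logged command):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Equivalent to the logged one-liner:
    	//   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    	// If crictl is installed it handles the listing; otherwise running
    	// the bare name fails and "docker ps -a" takes over.
    	cmd := `$(which crictl || echo crictl) ps -a || docker ps -a`
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("both crictl and docker ps failed:", err)
    	}
    	fmt.Print(string(out))
    }

On this node the docker branch is the one that runs, which is consistent with the sweep finding all containers via the Docker CLI.
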
	I0213 15:11:58.483175    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:03.485372    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:03.485489    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:03.497366    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:03.497445    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:03.507687    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:03.507750    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:03.518837    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:03.518907    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:03.537390    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:03.537450    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:03.549342    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:03.549414    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:03.560189    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:03.560263    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:03.569858    3378 logs.go:276] 0 containers: []
	W0213 15:12:03.569867    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:03.569921    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:03.580476    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:03.580491    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:03.580496    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:03.615376    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:03.615388    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:03.630842    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:03.630852    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:03.645152    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:03.645161    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:03.661500    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:03.661510    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:03.680690    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:03.680700    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:03.705506    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:03.705517    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:03.717126    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:03.717137    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:03.721435    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:03.721442    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:03.760369    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:03.760380    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:03.772663    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:03.772687    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:03.784866    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:03.784878    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:03.797145    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:03.797156    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:03.814345    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:03.814355    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:03.826441    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:03.826453    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:06.339857    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:11.341886    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:11.342071    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:11.362108    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:11.362183    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:11.375325    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:11.375400    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:11.386908    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:11.386970    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:11.397592    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:11.397664    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:11.408017    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:11.408089    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:11.419320    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:11.419387    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:11.429721    3378 logs.go:276] 0 containers: []
	W0213 15:12:11.429731    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:11.429789    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:11.441053    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:11.441073    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:11.441079    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:11.455231    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:11.455242    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:11.467396    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:11.467407    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:11.480100    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:11.480111    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:11.491180    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:11.491190    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:11.526647    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:11.526656    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:11.540716    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:11.540726    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:11.572306    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:11.572319    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:11.593977    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:11.593992    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:11.629185    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:11.629196    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:11.647422    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:11.647434    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:11.662017    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:11.662028    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:11.680791    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:11.680801    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:11.693607    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:11.693617    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:11.698354    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:11.698362    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:14.226086    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:19.227478    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:19.227686    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:19.249164    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:19.249259    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:19.263541    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:19.263622    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:19.276839    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:19.276905    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:19.287909    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:19.287980    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:19.298278    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:19.298347    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:19.309276    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:19.309345    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:19.318716    3378 logs.go:276] 0 containers: []
	W0213 15:12:19.318727    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:19.318783    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:19.336451    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:19.336467    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:19.336472    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:19.350328    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:19.350339    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:19.362856    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:19.362868    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:19.381161    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:19.381172    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:19.395832    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:19.395842    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:19.407804    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:19.407815    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:19.420566    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:19.420576    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:19.432410    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:19.432420    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:19.448537    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:19.448547    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:19.472394    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:19.472401    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:19.476742    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:19.476752    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:19.489600    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:19.489610    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:19.501614    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:19.501627    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:19.538554    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:19.538568    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:19.574154    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:19.574168    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:22.087944    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:27.090297    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:27.090756    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:27.138646    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:27.138774    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:27.157945    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:27.158048    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:27.172727    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:27.172812    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:27.184690    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:27.184754    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:27.194937    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:27.195009    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:27.206184    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:27.206257    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:27.216782    3378 logs.go:276] 0 containers: []
	W0213 15:12:27.216792    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:27.216863    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:27.231903    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:27.231921    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:27.231926    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:27.249725    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:27.249734    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:27.274590    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:27.274598    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:27.310037    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:27.310050    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:27.322523    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:27.322533    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:27.337166    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:27.337178    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:27.349472    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:27.349486    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:27.361908    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:27.361920    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:27.377680    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:27.377690    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:27.389569    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:27.389579    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:27.425670    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:27.425678    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:27.440241    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:27.440250    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:27.452356    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:27.452366    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:27.463613    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:27.463622    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:27.468045    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:27.468054    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:29.986305    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:34.988463    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:34.988576    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:35.007410    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:35.007486    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:35.017695    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:35.017768    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:35.028743    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:35.028815    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:35.039418    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:35.039488    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:35.049471    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:35.049549    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:35.060205    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:35.060271    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:35.074704    3378 logs.go:276] 0 containers: []
	W0213 15:12:35.074715    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:35.074775    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:35.085237    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:35.085251    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:35.085257    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:35.097175    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:35.097186    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:35.112719    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:35.112728    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:35.124773    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:35.124783    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:35.145296    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:35.145309    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:35.181559    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:35.181568    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:35.186214    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:35.186220    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:35.219467    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:35.219481    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:35.237875    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:35.237886    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:35.249497    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:35.249508    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:35.260886    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:35.260898    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:35.273007    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:35.273019    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:35.296656    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:35.296663    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:35.314063    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:35.314073    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:35.325725    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:35.325734    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:37.839325    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:42.841520    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:42.841712    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:42.859271    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:42.859359    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:42.872608    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:42.872689    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:42.884732    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:42.884806    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:42.895645    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:42.895714    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:42.906335    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:42.906405    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:42.916672    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:42.916739    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:42.926935    3378 logs.go:276] 0 containers: []
	W0213 15:12:42.926947    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:42.927002    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:42.937434    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:42.937452    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:42.937458    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:42.960873    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:42.960881    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:42.965099    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:42.965108    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:42.979338    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:42.979353    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:43.005441    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:43.005450    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:43.017144    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:43.017155    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:43.052203    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:43.052213    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:43.065276    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:43.065286    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:43.077583    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:43.077593    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:43.089373    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:43.089382    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:43.104132    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:43.104142    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:43.120721    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:43.120733    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:43.133075    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:43.133086    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:43.169621    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:43.169630    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:43.188583    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:43.188593    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:45.702497    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:50.703533    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:50.703652    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:50.716543    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:50.716618    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:50.731599    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:50.731667    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:50.742636    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:50.742710    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:50.753570    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:50.753663    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:50.764033    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:50.764104    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:50.775841    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:50.775912    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:50.787232    3378 logs.go:276] 0 containers: []
	W0213 15:12:50.787243    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:50.787307    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:50.806549    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:50.806565    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:50.806571    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:50.811581    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:50.811591    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:50.826121    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:50.826136    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:50.838407    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:50.838419    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:50.865382    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:50.865399    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:50.903358    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:50.903380    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:50.917183    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:50.917195    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:50.930636    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:50.930647    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:50.943687    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:50.943705    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:50.962218    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:50.962235    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:50.975090    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:50.975103    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:50.990106    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:50.990115    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:51.002800    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:51.002809    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:51.039663    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:51.039674    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:51.054376    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:51.054387    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:53.567977    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:58.570181    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:58.570425    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:58.589220    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:58.589315    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:58.603977    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:58.604044    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:58.615973    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:58.616069    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:58.626704    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:58.626777    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:58.638262    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:58.638345    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:58.648657    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:58.648715    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:58.659162    3378 logs.go:276] 0 containers: []
	W0213 15:12:58.659175    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:58.659231    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:58.669558    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:58.669575    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:58.669580    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:58.693209    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:58.693219    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:58.727554    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:58.727564    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:58.739435    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:58.739444    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:58.751539    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:58.751549    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:58.771031    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:58.771045    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:58.785245    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:58.785256    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:58.797404    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:58.797414    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:58.810125    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:58.810138    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:58.822189    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:58.822202    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:58.826513    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:58.826522    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:58.838265    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:58.838276    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:58.873392    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:58.873402    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:58.887718    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:58.887727    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:58.899937    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:58.899946    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:01.418225    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:06.418735    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:06.418910    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:06.430342    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:13:06.430417    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:06.441045    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:13:06.441127    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:06.452358    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:13:06.452432    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:06.463456    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:13:06.463520    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:06.476327    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:13:06.476407    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:06.493715    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:13:06.493786    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:06.504021    3378 logs.go:276] 0 containers: []
	W0213 15:13:06.504033    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:06.504093    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:13:06.515046    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:13:06.515060    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:13:06.515066    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:13:06.527956    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:13:06.527966    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:13:06.541707    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:13:06.541718    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:13:06.553635    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:06.553649    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:06.558056    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:06.558066    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:13:06.591561    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:13:06.591575    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:13:06.606318    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:13:06.606331    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:13:06.618650    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:13:06.618661    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:13:06.636659    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:13:06.636669    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:13:06.647622    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:13:06.647633    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:13:06.662254    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:13:06.662265    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:13:06.674177    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:13:06.674187    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:06.690303    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:06.690313    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:06.731644    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:13:06.731656    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:06.748250    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:06.748261    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:09.273202    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:14.275483    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:14.275738    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:14.302175    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:13:14.302303    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:14.320298    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:13:14.320372    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:14.333579    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:13:14.333653    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:14.344625    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:13:14.344694    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:14.354942    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:13:14.355005    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:14.365551    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:13:14.365615    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:14.375792    3378 logs.go:276] 0 containers: []
	W0213 15:13:14.375800    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:14.375865    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:13:14.386374    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:13:14.386389    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:13:14.386395    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:14.402129    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:13:14.402141    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:13:14.417398    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:13:14.417412    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:13:14.435439    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:13:14.435458    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:14.448726    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:14.448740    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:14.486355    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:13:14.486371    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:13:14.501592    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:13:14.501605    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:13:14.513890    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:14.513903    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:14.518234    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:13:14.518242    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:13:14.531879    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:14.531889    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:14.557492    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:13:14.557506    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:13:14.569901    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:13:14.569911    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:13:14.581462    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:13:14.581475    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:13:14.593643    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:14.593656    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:13:14.633412    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:13:14.633427    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:13:17.149448    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:22.151684    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:22.151902    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:22.179725    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:13:22.179811    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:22.192868    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:13:22.192944    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:22.204501    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:13:22.204578    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:22.215673    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:13:22.215745    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:22.226655    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:13:22.226721    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:22.237369    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:13:22.237430    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:22.248250    3378 logs.go:276] 0 containers: []
	W0213 15:13:22.248260    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:22.248318    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:13:22.258678    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:13:22.258695    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:13:22.258702    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:13:22.270610    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:13:22.270623    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:13:22.282629    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:22.282640    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:22.305141    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:13:22.305150    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:22.316511    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:13:22.316522    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:13:22.330735    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:13:22.330748    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:13:22.344599    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:13:22.344610    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:13:22.360044    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:13:22.360055    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:22.375635    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:22.375644    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:22.411982    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:13:22.411992    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:13:22.424336    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:13:22.424347    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:13:22.442841    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:22.442856    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:22.452457    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:22.452467    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:13:22.487710    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:13:22.487721    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:13:22.500102    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:13:22.500113    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:13:25.019842    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:30.021971    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:30.022190    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:30.043066    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:13:30.043168    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:30.059070    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:13:30.059139    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:30.072248    3378 logs.go:276] 4 containers: [70cce5993e6c 61a7fa439749 8ae24fa2b68a 5966c105587d]
	I0213 15:13:30.072314    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:30.083956    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:13:30.084027    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:30.103318    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:13:30.103386    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:30.113485    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:13:30.113554    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:30.123616    3378 logs.go:276] 0 containers: []
	W0213 15:13:30.123626    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:30.123683    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:13:30.134026    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:13:30.134042    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:13:30.134047    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:13:30.146801    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:13:30.146811    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:30.158551    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:30.158562    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:13:30.192251    3378 logs.go:123] Gathering logs for coredns [70cce5993e6c] ...
	I0213 15:13:30.192264    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70cce5993e6c"
	I0213 15:13:30.203242    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:13:30.203253    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:30.221105    3378 logs.go:123] Gathering logs for coredns [61a7fa439749] ...
	I0213 15:13:30.221116    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a7fa439749"
	I0213 15:13:30.232531    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:30.232541    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:30.270277    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:13:30.270291    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:13:30.284977    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:13:30.284989    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:13:30.299048    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:13:30.299058    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:13:30.310668    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:13:30.310679    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:13:30.331802    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:30.331812    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:30.353903    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:30.353909    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:30.358507    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:13:30.358513    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:13:30.370140    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:13:30.370150    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:13:32.883670    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:37.885876    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:37.886109    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:37.905772    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:13:37.905871    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:37.919752    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:13:37.919830    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:37.933863    3378 logs.go:276] 4 containers: [70cce5993e6c 61a7fa439749 8ae24fa2b68a 5966c105587d]
	I0213 15:13:37.933936    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:37.944030    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:13:37.944094    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:37.955655    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:13:37.955716    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:37.965947    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:13:37.966022    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:37.976376    3378 logs.go:276] 0 containers: []
	W0213 15:13:37.976388    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:37.976440    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:13:37.986734    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:13:37.986748    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:37.986754    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:13:38.021307    3378 logs.go:123] Gathering logs for coredns [61a7fa439749] ...
	I0213 15:13:38.021321    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a7fa439749"
	I0213 15:13:38.033436    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:13:38.033447    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:13:38.045827    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:13:38.045838    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:13:38.058621    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:13:38.058635    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:13:38.070537    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:38.070547    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:38.106192    3378 logs.go:123] Gathering logs for coredns [70cce5993e6c] ...
	I0213 15:13:38.106209    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70cce5993e6c"
	I0213 15:13:38.117910    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:13:38.117921    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:13:38.133718    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:13:38.133735    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:38.145398    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:38.145408    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:38.150244    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:13:38.150253    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:13:38.168175    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:13:38.168185    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:13:38.182295    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:13:38.182305    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:38.197564    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:13:38.197577    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:13:38.215187    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:38.215197    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:40.739572    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:45.741901    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:45.742128    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:45.763705    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:13:45.763800    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:45.778449    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:13:45.778518    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:45.790972    3378 logs.go:276] 4 containers: [70cce5993e6c 61a7fa439749 8ae24fa2b68a 5966c105587d]
	I0213 15:13:45.791043    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:45.802065    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:13:45.802128    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:45.812915    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:13:45.812979    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:45.823174    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:13:45.823236    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:45.833385    3378 logs.go:276] 0 containers: []
	W0213 15:13:45.833393    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:45.833446    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:13:45.844356    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:13:45.844373    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:13:45.844378    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:13:45.857946    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:13:45.857956    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:13:45.876155    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:45.876166    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:45.911840    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:45.911847    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:45.916097    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:45.916103    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:13:45.950594    3378 logs.go:123] Gathering logs for coredns [70cce5993e6c] ...
	I0213 15:13:45.950604    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70cce5993e6c"
	I0213 15:13:45.963358    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:13:45.963369    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:13:45.975879    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:13:45.975894    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:13:45.987407    3378 logs.go:123] Gathering logs for coredns [61a7fa439749] ...
	I0213 15:13:45.987416    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a7fa439749"
	I0213 15:13:45.999550    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:13:45.999562    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:46.015029    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:46.015040    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:46.037647    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:13:46.037656    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:13:46.052718    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:13:46.052728    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:13:46.064933    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:13:46.064943    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:13:46.076139    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:13:46.076150    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:48.590017    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:53.592165    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:53.595661    3378 out.go:177] 
	W0213 15:13:53.599668    3378 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0213 15:13:53.599679    3378 out.go:239] * 
	W0213 15:13:53.600487    3378 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:13:53.611621    3378 out.go:177] 

** /stderr **
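The repeating pattern in the stderr above is minikube's node-wait loop: api_server.go probes https://10.0.2.15:8443/healthz with a short per-request timeout, and after each failed probe logs.go re-enumerates the control-plane containers and dumps their logs before retrying, until the 6m0s StartHostTimeout expires and start exits with GUEST_START. A minimal Go sketch of that kind of probe loop, for reference only (not minikube's actual api_server.go implementation; the endpoint, the 5s client timeout, and the 6m0s budget are read off the log, and InsecureSkipVerify is an assumption because the guest apiserver serves a cluster-internal certificate):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz probes the apiserver /healthz endpoint until it reports
    // healthy or the overall budget expires, mirroring the retry cadence
    // visible in the log (a probe roughly every 8s with a 5s client timeout).
    func pollHealthz(url string, budget time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: the guest apiserver's certificate is not
                // trusted by the host running the probe.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(budget)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz reported healthy
                }
            }
            time.Sleep(3 * time.Second) // back off before the next probe
        }
        return fmt.Errorf("apiserver healthz never reported healthy within %s", budget)
    }

    func main() {
        if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
            fmt.Println(err) // the condition behind the GUEST_START exit above
        }
    }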
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-781000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-02-13 15:13:53.698667 -0800 PST m=+2109.632778084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-781000 -n running-upgrade-781000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-781000 -n running-upgrade-781000: exit status 2 (15.748581584s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
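Exit status 2 from minikube status is tolerated here because status reports cluster state partly through its exit code: the host VM is Running (see stdout above) even though the Kubernetes components never became healthy, so the helper records the non-zero exit instead of failing on it. A hedged Go sketch of that tolerance check (the binary path, profile, and flags come from this run; the exit-code handling is illustrative, not the helpers_test.go implementation):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the helpers_test.go:239 step above.
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "running-upgrade-781000",
            "-n", "running-upgrade-781000")
        out, err := cmd.Output() // stdout is captured even on a non-zero exit
        fmt.Printf("host state: %s\n", out) // e.g. "Running"

        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // status encodes degraded cluster state in its exit code,
            // so record the code rather than treating it as a hard failure.
            fmt.Printf("status exited %d (may be ok)\n", ee.ExitCode())
        } else if err != nil {
            fmt.Println("could not run status:", err) // e.g. binary missing
        }
    }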
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-781000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-294000          | force-systemd-flag-294000 | jenkins | v1.32.0 | 13 Feb 24 15:03 PST |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-056000              | force-systemd-env-056000  | jenkins | v1.32.0 | 13 Feb 24 15:03 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-056000           | force-systemd-env-056000  | jenkins | v1.32.0 | 13 Feb 24 15:03 PST | 13 Feb 24 15:03 PST |
	| start   | -p docker-flags-818000                | docker-flags-818000       | jenkins | v1.32.0 | 13 Feb 24 15:03 PST |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-294000             | force-systemd-flag-294000 | jenkins | v1.32.0 | 13 Feb 24 15:03 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-294000          | force-systemd-flag-294000 | jenkins | v1.32.0 | 13 Feb 24 15:03 PST | 13 Feb 24 15:03 PST |
	| start   | -p cert-expiration-172000             | cert-expiration-172000    | jenkins | v1.32.0 | 13 Feb 24 15:03 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-818000 ssh               | docker-flags-818000       | jenkins | v1.32.0 | 13 Feb 24 15:03 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-818000 ssh               | docker-flags-818000       | jenkins | v1.32.0 | 13 Feb 24 15:03 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-818000                | docker-flags-818000       | jenkins | v1.32.0 | 13 Feb 24 15:03 PST | 13 Feb 24 15:03 PST |
	| start   | -p cert-options-732000                | cert-options-732000       | jenkins | v1.32.0 | 13 Feb 24 15:03 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-732000 ssh               | cert-options-732000       | jenkins | v1.32.0 | 13 Feb 24 15:03 PST |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-732000 -- sudo        | cert-options-732000       | jenkins | v1.32.0 | 13 Feb 24 15:03 PST |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-732000                | cert-options-732000       | jenkins | v1.32.0 | 13 Feb 24 15:03 PST | 13 Feb 24 15:03 PST |
	| start   | -p running-upgrade-781000             | minikube                  | jenkins | v1.26.0 | 13 Feb 24 15:03 PST | 13 Feb 24 15:04 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-781000             | running-upgrade-781000    | jenkins | v1.32.0 | 13 Feb 24 15:04 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-172000             | cert-expiration-172000    | jenkins | v1.32.0 | 13 Feb 24 15:06 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-172000             | cert-expiration-172000    | jenkins | v1.32.0 | 13 Feb 24 15:06 PST | 13 Feb 24 15:06 PST |
	| start   | -p kubernetes-upgrade-274000          | kubernetes-upgrade-274000 | jenkins | v1.32.0 | 13 Feb 24 15:06 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-274000          | kubernetes-upgrade-274000 | jenkins | v1.32.0 | 13 Feb 24 15:06 PST | 13 Feb 24 15:06 PST |
	| start   | -p kubernetes-upgrade-274000          | kubernetes-upgrade-274000 | jenkins | v1.32.0 | 13 Feb 24 15:06 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-274000          | kubernetes-upgrade-274000 | jenkins | v1.32.0 | 13 Feb 24 15:06 PST | 13 Feb 24 15:06 PST |
	| start   | -p stopped-upgrade-809000             | minikube                  | jenkins | v1.26.0 | 13 Feb 24 15:06 PST | 13 Feb 24 15:07 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-809000 stop           | minikube                  | jenkins | v1.26.0 | 13 Feb 24 15:07 PST | 13 Feb 24 15:07 PST |
	| start   | -p stopped-upgrade-809000             | stopped-upgrade-809000    | jenkins | v1.32.0 | 13 Feb 24 15:07 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 15:07:53
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 15:07:53.493226    3510 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:07:53.493393    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:07:53.493398    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:07:53.493402    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:07:53.493573    3510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:07:53.494746    3510 out.go:298] Setting JSON to false
	I0213 15:07:53.513584    3510 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2095,"bootTime":1707863578,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:07:53.513642    3510 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:07:53.518882    3510 out.go:177] * [stopped-upgrade-809000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:07:53.525855    3510 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:07:53.529846    3510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:07:53.525904    3510 notify.go:220] Checking for updates...
	I0213 15:07:53.531314    3510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:07:53.534838    3510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:07:53.537865    3510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:07:53.540927    3510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:07:53.545083    3510 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:07:53.550185    3510 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0213 15:07:53.553016    3510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:07:53.557884    3510 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 15:07:53.564709    3510 start.go:298] selected driver: qemu2
	I0213 15:07:53.564715    3510 start.go:902] validating driver "qemu2" against &{Name:stopped-upgrade-809000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50344 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-809000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:07:53.564769    3510 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:07:53.567480    3510 cni.go:84] Creating CNI manager for ""
	I0213 15:07:53.567494    3510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:07:53.567501    3510 start_flags.go:321] config:
	{Name:stopped-upgrade-809000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50344 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-809000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:07:53.567592    3510 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:07:53.574829    3510 out.go:177] * Starting control plane node stopped-upgrade-809000 in cluster stopped-upgrade-809000
	I0213 15:07:53.578846    3510 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0213 15:07:53.578862    3510 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0213 15:07:53.578872    3510 cache.go:56] Caching tarball of preloaded images
	I0213 15:07:53.578933    3510 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:07:53.578939    3510 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.1 on docker
	I0213 15:07:53.579005    3510 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/config.json ...
	I0213 15:07:53.579522    3510 start.go:365] acquiring machines lock for stopped-upgrade-809000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:07:53.579564    3510 start.go:369] acquired machines lock for "stopped-upgrade-809000" in 35.75µs
	I0213 15:07:53.579573    3510 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:07:53.579579    3510 fix.go:54] fixHost starting: 
	I0213 15:07:53.579696    3510 fix.go:102] recreateIfNeeded on stopped-upgrade-809000: state=Stopped err=<nil>
	W0213 15:07:53.579704    3510 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:07:53.587810    3510 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-809000" ...
	I0213 15:07:56.306750    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:07:56.307431    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:07:56.346186    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:07:56.346353    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:07:56.376903    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:07:56.376987    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:07:56.390726    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:07:56.390803    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:07:56.402233    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:07:56.402302    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:07:56.413022    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:07:56.413092    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:07:56.424163    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:07:56.424244    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:07:56.435005    3378 logs.go:276] 0 containers: []
	W0213 15:07:56.435016    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:07:56.435077    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:07:56.445660    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:07:56.445674    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:07:56.445680    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:07:56.459682    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:07:56.459691    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:07:56.470858    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:07:56.470869    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:07:56.495687    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:07:56.495696    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:07:56.533029    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:07:56.533038    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:07:56.544598    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:07:56.544611    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:07:56.556318    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:07:56.556331    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:07:56.580299    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:07:56.580309    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:07:56.597254    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:07:56.597267    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:07:56.609012    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:07:56.609022    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:07:56.627227    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:07:56.627240    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:07:56.666274    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:07:56.666286    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:07:56.679926    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:07:56.679935    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:07:56.694029    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:07:56.694040    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:07:56.709341    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:07:56.709352    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:07:56.723941    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:07:56.723950    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:07:56.735062    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:07:56.735074    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:07:53.591893    3510 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50309-:22,hostfwd=tcp::50310-:2376,hostname=stopped-upgrade-809000 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/disk.qcow2
	I0213 15:07:53.640304    3510 main.go:141] libmachine: STDOUT: 
	I0213 15:07:53.640337    3510 main.go:141] libmachine: STDERR: 
	I0213 15:07:53.640359    3510 main.go:141] libmachine: Waiting for VM to start (ssh -p 50309 docker@127.0.0.1)...
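
Note: the "Waiting for VM to start" step above polls the forwarded SSH port (hostfwd=tcp::50309-:22 in the qemu command) until the guest's sshd accepts connections. A minimal sketch of that kind of readiness wait in Go follows; waitForPort is a hypothetical helper, not minikube's actual implementation.

    // Poll a TCP port until it accepts a connection or a deadline passes.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func waitForPort(host string, port int, timeout time.Duration) error {
    	addr := fmt.Sprintf("%s:%d", host, port)
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close() // guest sshd is up; ssh -p 50309 docker@127.0.0.1 would now connect
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
    	// 50309 is the host side of the qemu user-mode forward to guest port 22.
    	fmt.Println(waitForPort("127.0.0.1", 50309, 3*time.Minute))
    }
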
	I0213 15:07:59.239914    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:04.241931    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
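
Note: each "Checking apiserver healthz" / "stopped:" pair above is one HTTPS GET of /healthz that the client abandons after its timeout; "Client.Timeout exceeded while awaiting headers" is the standard net/http error for that. A sketch of such a probe, assuming a 5s timeout and skipping certificate verification purely for brevity (minikube itself verifies against the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gap between the check and "stopped:" lines
    		Transport: &http.Transport{
    			// Sketch shortcut only; do not skip verification in real code.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
    }
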
	I0213 15:08:04.242044    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:04.254337    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:04.254410    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:04.266933    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:04.267010    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:04.279566    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:04.279652    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:04.291745    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:04.291829    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:04.303567    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:04.303635    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:04.320388    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:04.320470    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:04.332625    3378 logs.go:276] 0 containers: []
	W0213 15:08:04.332635    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:04.332692    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:04.348501    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:04.348517    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:04.348524    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:04.364905    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:04.364921    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:04.382178    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:04.382189    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:04.395562    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:04.395575    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:04.437546    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:04.437558    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:04.455537    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:04.455556    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:04.471858    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:04.471870    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:04.486031    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:04.486042    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:04.530154    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:04.530173    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:04.556260    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:04.556271    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:04.574553    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:04.574564    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:04.597844    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:04.597859    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:04.602191    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:04.602199    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:04.616470    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:04.616484    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:04.628594    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:04.628606    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:04.645916    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:04.645927    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:04.661731    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:04.661744    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
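
Note: every "Gathering logs for X" block above is the same two-step pattern: list container IDs whose name matches a k8s_<component> filter, then tail the last 400 lines of each. A rough stand-alone equivalent (run locally via exec rather than over SSH; assumes docker on PATH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=<name> --format={{.ID}}
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name="+name, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := containerIDs("k8s_kube-apiserver")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, id := range ids {
    		// mirrors: docker logs --tail 400 <id> (container logs may arrive on stderr)
    		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("== %s ==\n%s", id, logs)
    	}
    }
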
	I0213 15:08:07.184317    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:12.186698    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:12.187109    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:12.226729    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:12.226864    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:12.248372    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:12.248486    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:12.271787    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:12.271864    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:12.283311    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:12.283383    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:12.293988    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:12.294062    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:12.304845    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:12.304912    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:12.315198    3378 logs.go:276] 0 containers: []
	W0213 15:08:12.315209    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:12.315268    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:12.335172    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:12.335188    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:12.335194    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:12.346262    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:12.346272    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:12.350441    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:12.350449    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:12.365464    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:12.365475    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:12.382920    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:12.382929    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:12.393753    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:12.393764    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:12.430376    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:12.430389    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:12.444893    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:12.444903    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:12.456596    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:12.456608    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:14.283493    3510 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/config.json ...
	I0213 15:08:14.284156    3510 machine.go:88] provisioning docker machine ...
	I0213 15:08:14.284209    3510 buildroot.go:166] provisioning hostname "stopped-upgrade-809000"
	I0213 15:08:14.284379    3510 main.go:141] libmachine: Using SSH client type: native
	I0213 15:08:14.285148    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10049b8e0] 0x10049e050 <nil>  [] 0s} localhost 50309 <nil> <nil>}
	I0213 15:08:14.285168    3510 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-809000 && echo "stopped-upgrade-809000" | sudo tee /etc/hostname
	I0213 15:08:14.381516    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-809000
	
	I0213 15:08:14.381600    3510 main.go:141] libmachine: Using SSH client type: native
	I0213 15:08:14.381975    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10049b8e0] 0x10049e050 <nil>  [] 0s} localhost 50309 <nil> <nil>}
	I0213 15:08:14.381988    3510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-809000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-809000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-809000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 15:08:14.453530    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 15:08:14.453544    3510 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18170-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18170-979/.minikube}
	I0213 15:08:14.453554    3510 buildroot.go:174] setting up certificates
	I0213 15:08:14.453566    3510 provision.go:83] configureAuth start
	I0213 15:08:14.453570    3510 provision.go:138] copyHostCerts
	I0213 15:08:14.453687    3510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem, removing ...
	I0213 15:08:14.453696    3510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem
	I0213 15:08:14.453839    3510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem (1078 bytes)
	I0213 15:08:14.454054    3510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem, removing ...
	I0213 15:08:14.454060    3510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem
	I0213 15:08:14.454119    3510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem (1123 bytes)
	I0213 15:08:14.454240    3510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem, removing ...
	I0213 15:08:14.454244    3510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem
	I0213 15:08:14.454302    3510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem (1675 bytes)
	I0213 15:08:14.454412    3510 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-809000 san=[127.0.0.1 localhost localhost 127.0.0.1 minikube stopped-upgrade-809000]
	I0213 15:08:14.487904    3510 provision.go:172] copyRemoteCerts
	I0213 15:08:14.487934    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 15:08:14.487941    3510 sshutil.go:53] new ssh client: &{IP:localhost Port:50309 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa Username:docker}
	I0213 15:08:14.522972    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0213 15:08:14.529725    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 15:08:14.536705    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 15:08:14.543914    3510 provision.go:86] duration metric: configureAuth took 90.346042ms
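
Note: the certs staged into /etc/docker above back the dockerd flags --tlsverify --tlscacert --tlscert --tlskey in the unit file written below; a client reaching the daemon on the forwarded 2376 port must present the matching client pair and trust the CA. A minimal mutual-TLS client sketch (the ca.pem/cert.pem/key.pem names come from this run's certs directory):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"net/http"
    	"os"
    )

    func tlsClient(caFile, certFile, keyFile string) (*http.Client, error) {
    	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
    	if err != nil {
    		return nil, err
    	}
    	caPEM, err := os.ReadFile(caFile)
    	if err != nil {
    		return nil, err
    	}
    	pool := x509.NewCertPool()
    	if !pool.AppendCertsFromPEM(caPEM) {
    		return nil, fmt.Errorf("no certs parsed from %s", caFile)
    	}
    	return &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
    		Certificates: []tls.Certificate{cert}, // client cert for --tlsverify
    		RootCAs:      pool,                    // trust the minikube CA
    	}}}, nil
    }

    func main() {
    	c, err := tlsClient("ca.pem", "cert.pem", "key.pem")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	// 50310 is the hostfwd to the guest's 2376 in the qemu command above.
    	resp, err := c.Get("https://127.0.0.1:50310/_ping")
    	fmt.Println(resp, err)
    }
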
	I0213 15:08:14.543922    3510 buildroot.go:189] setting minikube options for container-runtime
	I0213 15:08:14.544038    3510 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:08:14.544072    3510 main.go:141] libmachine: Using SSH client type: native
	I0213 15:08:14.544294    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10049b8e0] 0x10049e050 <nil>  [] 0s} localhost 50309 <nil> <nil>}
	I0213 15:08:14.544300    3510 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 15:08:14.609769    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0213 15:08:14.609778    3510 buildroot.go:70] root file system type: tmpfs
	I0213 15:08:14.609833    3510 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 15:08:14.609889    3510 main.go:141] libmachine: Using SSH client type: native
	I0213 15:08:14.610143    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10049b8e0] 0x10049e050 <nil>  [] 0s} localhost 50309 <nil> <nil>}
	I0213 15:08:14.610181    3510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 15:08:14.678717    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 15:08:14.678766    3510 main.go:141] libmachine: Using SSH client type: native
	I0213 15:08:14.679035    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10049b8e0] 0x10049e050 <nil>  [] 0s} localhost 50309 <nil> <nil>}
	I0213 15:08:14.679045    3510 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 15:08:15.036824    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0213 15:08:15.036837    3510 machine.go:91] provisioned docker machine in 752.68725ms
	I0213 15:08:15.036842    3510 start.go:300] post-start starting for "stopped-upgrade-809000" (driver="qemu2")
	I0213 15:08:15.036849    3510 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 15:08:15.036916    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 15:08:15.036925    3510 sshutil.go:53] new ssh client: &{IP:localhost Port:50309 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa Username:docker}
	I0213 15:08:15.071947    3510 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 15:08:15.073138    3510 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 15:08:15.073144    3510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/addons for local assets ...
	I0213 15:08:15.073209    3510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/files for local assets ...
	I0213 15:08:15.073320    3510 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem -> 14072.pem in /etc/ssl/certs
	I0213 15:08:15.073440    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 15:08:15.076238    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem --> /etc/ssl/certs/14072.pem (1708 bytes)
	I0213 15:08:15.083111    3510 start.go:303] post-start completed in 46.264583ms
	I0213 15:08:15.083119    3510 fix.go:56] fixHost completed within 21.50400075s
	I0213 15:08:15.083156    3510 main.go:141] libmachine: Using SSH client type: native
	I0213 15:08:15.083394    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10049b8e0] 0x10049e050 <nil>  [] 0s} localhost 50309 <nil> <nil>}
	I0213 15:08:15.083399    3510 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 15:08:15.149231    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865695.424839254
	
	I0213 15:08:15.149238    3510 fix.go:206] guest clock: 1707865695.424839254
	I0213 15:08:15.149242    3510 fix.go:219] Guest: 2024-02-13 15:08:15.424839254 -0800 PST Remote: 2024-02-13 15:08:15.08312 -0800 PST m=+21.623028584 (delta=341.719254ms)
	I0213 15:08:15.149251    3510 fix.go:190] guest clock delta is within tolerance: 341.719254ms
	I0213 15:08:15.149257    3510 start.go:83] releasing machines lock for "stopped-upgrade-809000", held for 21.570149583s
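
Note: the guest-clock check above reads `date +%s.%N` over SSH (the %!s(MISSING) rendering is a quirk of how the command is logged) and compares it with the host clock. Reproducing the logged delta from this run's two timestamps, with a 2-second tolerance picked for the sketch (the log does not state the real threshold):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock parses `date +%s.%N` output, assuming a 9-digit fraction.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	nsec, err := strconv.ParseInt(parts[1], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := parseGuestClock("1707865695.424839254") // guest output above
    	host := time.Date(2024, 2, 13, 15, 8, 15, 83120000, time.FixedZone("PST", -8*3600)) // "Remote" time above
    	delta := guest.Sub(host)
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < 2*time.Second) // prints delta=341.719254ms
    }
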
	I0213 15:08:15.149304    3510 ssh_runner.go:195] Run: cat /version.json
	I0213 15:08:15.149311    3510 sshutil.go:53] new ssh client: &{IP:localhost Port:50309 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa Username:docker}
	I0213 15:08:15.149330    3510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 15:08:15.149347    3510 sshutil.go:53] new ssh client: &{IP:localhost Port:50309 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa Username:docker}
	W0213 15:08:15.149985    3510 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50309: connect: connection refused
	I0213 15:08:15.150001    3510 retry.go:31] will retry after 298.580206ms: dial tcp [::1]:50309: connect: connection refused
	W0213 15:08:15.181992    3510 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0213 15:08:15.182032    3510 ssh_runner.go:195] Run: systemctl --version
	I0213 15:08:15.184398    3510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 15:08:15.185811    3510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 15:08:15.185838    3510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 15:08:15.188728    3510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 15:08:15.193701    3510 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 15:08:15.193708    3510 start.go:475] detecting cgroup driver to use...
	I0213 15:08:15.193776    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:08:15.200195    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0213 15:08:15.203351    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 15:08:15.206096    3510 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 15:08:15.206125    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 15:08:15.209206    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:08:15.212664    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 15:08:15.215948    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:08:15.218876    3510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 15:08:15.221741    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 15:08:15.224976    3510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 15:08:15.228027    3510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 15:08:15.230688    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:08:15.310645    3510 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 15:08:15.316677    3510 start.go:475] detecting cgroup driver to use...
	I0213 15:08:15.316733    3510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 15:08:15.322175    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 15:08:15.327007    3510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 15:08:15.335182    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 15:08:15.339309    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 15:08:15.344001    3510 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0213 15:08:15.400865    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 15:08:15.406315    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:08:15.411684    3510 ssh_runner.go:195] Run: which cri-dockerd
	I0213 15:08:15.412932    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 15:08:15.415828    3510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 15:08:15.420525    3510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 15:08:15.511129    3510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 15:08:15.589948    3510 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 15:08:15.590012    3510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 15:08:15.596132    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:08:15.673165    3510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:08:16.830868    3510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157709333s)
	I0213 15:08:16.830941    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 15:08:16.835288    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 15:08:16.839436    3510 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 15:08:16.919344    3510 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 15:08:17.005101    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:08:17.086350    3510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 15:08:17.092329    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 15:08:17.097152    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:08:17.177825    3510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 15:08:17.215949    3510 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 15:08:17.216026    3510 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 15:08:17.218087    3510 start.go:543] Will wait 60s for crictl version
	I0213 15:08:17.218125    3510 ssh_runner.go:195] Run: which crictl
	I0213 15:08:17.219706    3510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 15:08:17.235610    3510 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0213 15:08:17.235680    3510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:08:17.252843    3510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:08:12.471405    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:12.471417    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:12.482999    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:12.483010    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:12.494722    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:12.494732    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:12.532712    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:12.532721    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:12.546519    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:12.546529    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:12.562871    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:12.562882    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:12.585692    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:12.585700    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:12.612681    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:12.612691    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:15.130489    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:17.277619    3510 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0213 15:08:17.277701    3510 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0213 15:08:17.279066    3510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
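
Note: the /etc/hosts edit just above is an idempotent rewrite: filter out any existing host.minikube.internal line, append a fresh one, write a temp file, then copy it into place. The same pattern in Go, pointed at a scratch path since this is only a sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost leaves exactly one line mapping hostname to ip in the file.
    func upsertHost(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+hostname) {
    			kept = append(kept, line) // drop any stale entry, keep the rest
    		}
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	fmt.Println(upsertHost("/tmp/hosts.sketch", "10.0.2.2", "host.minikube.internal"))
    }
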
	I0213 15:08:17.282629    3510 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0213 15:08:17.282670    3510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:08:17.293535    3510 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 15:08:17.293544    3510 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0213 15:08:17.293592    3510 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 15:08:17.297041    3510 ssh_runner.go:195] Run: which lz4
	I0213 15:08:17.298293    3510 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 15:08:17.299450    3510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 15:08:17.299460    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0213 15:08:18.036198    3510 docker.go:649] Took 0.737942 seconds to copy over tarball
	I0213 15:08:18.036252    3510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
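
Note: the preload step above is: stat /preloaded.tar.lz4 (absent, so the ~360 MB tarball is copied over), then unpack it into /var with capability xattrs preserved so the docker image store arrives pre-populated. An equivalent extraction, shelled out the same way (assumes tar and lz4 on PATH; point it at a scratch directory, and note the real run needs root for the ownership and xattrs):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func extractPreload(tarball, dest string) error {
    	// mirrors: tar --xattrs --xattrs-include security.capability -I lz4 -C <dest> -xf <tarball>
    	out, err := exec.Command("tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", dest, "-xf", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("tar: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(extractPreload("preloaded.tar.lz4", "/tmp/preload.sketch"))
    }
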
	I0213 15:08:20.132621    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:20.133231    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:20.174786    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:20.174912    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:20.192981    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:20.193068    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:20.206255    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:20.206324    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:20.217799    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:20.217867    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:20.228506    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:20.228570    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:20.238914    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:20.238976    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:20.248952    3378 logs.go:276] 0 containers: []
	W0213 15:08:20.248962    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:20.249023    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:20.259086    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:20.259103    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:20.259110    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:20.295589    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:20.295600    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:20.310360    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:20.310375    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:20.335680    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:20.335697    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:20.353727    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:20.353744    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:20.370705    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:20.370715    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:20.411844    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:20.411859    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:20.423404    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:20.423415    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:20.427894    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:20.427902    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:20.439615    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:20.439626    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:20.450891    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:20.450902    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:20.474372    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:20.474381    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:20.485968    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:20.485978    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:20.500700    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:20.500711    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:20.521566    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:20.521577    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:20.536947    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:20.536957    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:20.554755    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:20.554764    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:19.213501    3510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.17726125s)
	I0213 15:08:19.213515    3510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 15:08:19.229374    3510 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 15:08:19.232707    3510 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0213 15:08:19.237888    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:08:19.315220    3510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:08:21.593089    3510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.277900583s)
	I0213 15:08:21.593187    3510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:08:21.607633    3510 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 15:08:21.607640    3510 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0213 15:08:21.607645    3510 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 15:08:21.622649    3510 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:08:21.622726    3510 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:08:21.622747    3510 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:08:21.622831    3510 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0213 15:08:21.622847    3510 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:08:21.622937    3510 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0213 15:08:21.622973    3510 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:08:21.622994    3510 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:08:21.631606    3510 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:08:21.631641    3510 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0213 15:08:21.631670    3510 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:08:21.631798    3510 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:08:21.631839    3510 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:08:21.632267    3510 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0213 15:08:21.632407    3510 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:08:21.632532    3510 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:08:23.068344    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:23.852743    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:08:23.880462    3510 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0213 15:08:23.880501    3510 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:08:23.880593    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:08:23.899467    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0213 15:08:23.902984    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0213 15:08:23.919878    3510 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0213 15:08:23.919901    3510 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0213 15:08:23.919956    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0213 15:08:23.931036    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0213 15:08:23.932483    3510 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0213 15:08:23.934200    3510 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0213 15:08:23.934214    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0213 15:08:23.941497    3510 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0213 15:08:23.941507    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0213 15:08:23.950165    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:08:23.975993    3510 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0213 15:08:23.976024    3510 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0213 15:08:23.976043    3510 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:08:23.976100    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:08:23.986369    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0213 15:08:23.988194    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:08:23.997956    3510 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0213 15:08:23.997976    3510 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:08:23.998038    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:08:24.003256    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:08:24.004535    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0213 15:08:24.010146    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0213 15:08:24.011792    3510 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0213 15:08:24.011908    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:08:24.015977    3510 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0213 15:08:24.015997    3510 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:08:24.016043    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:08:24.029878    3510 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0213 15:08:24.029898    3510 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0213 15:08:24.029953    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0213 15:08:24.030280    3510 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0213 15:08:24.030290    3510 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:08:24.030313    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:08:24.045767    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0213 15:08:24.045787    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0213 15:08:24.045818    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0213 15:08:24.045883    3510 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0213 15:08:24.047294    3510 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0213 15:08:24.047305    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0213 15:08:24.084120    3510 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0213 15:08:24.084133    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0213 15:08:24.120713    3510 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0213 15:08:24.475156    3510 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0213 15:08:24.475402    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:08:24.495775    3510 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0213 15:08:24.495803    3510 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:08:24.495884    3510 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:08:24.516072    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0213 15:08:24.516188    3510 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0213 15:08:24.517876    3510 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0213 15:08:24.517889    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0213 15:08:24.544664    3510 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0213 15:08:24.544682    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0213 15:08:24.781117    3510 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0213 15:08:24.781155    3510 cache_images.go:92] LoadImages completed in 3.173572208s
	W0213 15:08:24.781192    3510 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
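
Note on the cache-load cycle above: for each required image, minikube first stats the archive on the guest, copies it over only when the stat fails, and then pipes it into the runtime. A minimal sketch of one iteration, assuming a reachable guest aliased as $NODE (the alias and key handling are assumptions, not part of this run):

    IMG=/var/lib/minikube/images/coredns_v1.8.6
    CACHE=$HOME/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
    ssh "$NODE" "stat -c '%s %y' $IMG" 2>/dev/null \
      || scp "$CACHE" "$NODE:$IMG"                  # transfer only when absent
    ssh "$NODE" "sudo cat $IMG | docker load"       # docker load reads the tar from stdin

A missing archive on the host side is non-fatal: kube-proxy_v1.24.1 is absent from the cache here, so LoadImages completes with only the X warning above, and the image is presumably left to be pulled at deploy time.
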
	I0213 15:08:24.781257    3510 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 15:08:24.794131    3510 cni.go:84] Creating CNI manager for ""
	I0213 15:08:24.794144    3510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:08:24.794158    3510 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 15:08:24.794167    3510 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-809000 NodeName:stopped-upgrade-809000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 15:08:24.794242    3510 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-809000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
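
The dump above is a single multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---, all consumed from one --config file. A quick way to sanity-check that the four documents parse (yq here is an editor's assumption, not a tool used by the run):

    yq eval-all '.kind' /var/tmp/minikube/kubeadm.yaml
    # InitConfiguration
    # ClusterConfiguration
    # KubeletConfiguration
    # KubeProxyConfiguration
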
	
	I0213 15:08:24.794275    3510 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-809000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-809000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 15:08:24.794325    3510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0213 15:08:24.797195    3510 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 15:08:24.797228    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 15:08:24.800183    3510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0213 15:08:24.805049    3510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 15:08:24.810047    3510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
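
The three "scp memory" writes are the kubelet systemd drop-in, the kubelet unit itself, and the kubeadm config. The drop-in follows the standard systemd override pattern: an empty ExecStart= clears the unit's default command before the minikube-specific one is set. Reconstructed by hand as a sketch (the exact bytes are the 380-byte file above):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-809000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15

    [Install]
    EOF
    sudo systemctl daemon-reload
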
	I0213 15:08:24.815465    3510 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0213 15:08:24.816685    3510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
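
The hosts update above is idempotent by construction: the grep first checks whether the mapping already exists, and the rewrite drops any stale control-plane.minikube.internal line before appending the current one through a temp file. Generalized sketch (the IP/HOST variables are illustrative):

    IP=10.0.2.15; HOST=control-plane.minikube.internal
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
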
	I0213 15:08:24.820232    3510 certs.go:56] Setting up /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000 for IP: 10.0.2.15
	I0213 15:08:24.820244    3510 certs.go:190] acquiring lock for shared ca certs: {Name:mk65e421691b8fb2c09fb65e08f20f9a769da9f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:08:24.820383    3510 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key
	I0213 15:08:24.820428    3510 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key
	I0213 15:08:24.820494    3510 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.key
	I0213 15:08:24.820539    3510 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/apiserver.key.49504c3e
	I0213 15:08:24.820583    3510 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/proxy-client.key
	I0213 15:08:24.820711    3510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem (1338 bytes)
	W0213 15:08:24.820743    3510 certs.go:433] ignoring /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407_empty.pem, impossibly tiny 0 bytes
	I0213 15:08:24.820749    3510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 15:08:24.820777    3510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem (1078 bytes)
	I0213 15:08:24.820805    3510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem (1123 bytes)
	I0213 15:08:24.820830    3510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem (1675 bytes)
	I0213 15:08:24.820883    3510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem (1708 bytes)
	I0213 15:08:24.821223    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 15:08:24.828553    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 15:08:24.836094    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 15:08:24.843293    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 15:08:24.850289    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 15:08:24.856770    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 15:08:24.864118    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 15:08:24.870995    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0213 15:08:24.877726    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem --> /usr/share/ca-certificates/1407.pem (1338 bytes)
	I0213 15:08:24.884390    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem --> /usr/share/ca-certificates/14072.pem (1708 bytes)
	I0213 15:08:24.890760    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 15:08:24.897283    3510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 15:08:24.902488    3510 ssh_runner.go:195] Run: openssl version
	I0213 15:08:24.904413    3510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14072.pem && ln -fs /usr/share/ca-certificates/14072.pem /etc/ssl/certs/14072.pem"
	I0213 15:08:24.908009    3510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14072.pem
	I0213 15:08:24.909370    3510 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:48 /usr/share/ca-certificates/14072.pem
	I0213 15:08:24.909393    3510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14072.pem
	I0213 15:08:24.911152    3510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14072.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 15:08:24.913859    3510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 15:08:24.916763    3510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:08:24.918353    3510 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:40 /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:08:24.918380    3510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:08:24.920268    3510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 15:08:24.923652    3510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1407.pem && ln -fs /usr/share/ca-certificates/1407.pem /etc/ssl/certs/1407.pem"
	I0213 15:08:24.926566    3510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1407.pem
	I0213 15:08:24.928013    3510 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:48 /usr/share/ca-certificates/1407.pem
	I0213 15:08:24.928034    3510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1407.pem
	I0213 15:08:24.929813    3510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1407.pem /etc/ssl/certs/51391683.0"
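
The hashes in the symlink names (3ec20f2e, b5213941, 51391683) are OpenSSL subject hashes: c_rehash-style certificate lookup expects each CA to be reachable as /etc/ssl/certs/<hash>.0, which is exactly what the test -L || ln -fs calls create. The links can effectively be rebuilt by hand for the same three PEMs:

    for pem in /usr/share/ca-certificates/14072.pem \
               /usr/share/ca-certificates/minikubeCA.pem \
               /usr/share/ca-certificates/1407.pem; do
      h=$(openssl x509 -hash -noout -in "$pem")   # e.g. 3ec20f2e for 14072.pem
      sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"
    done
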
	I0213 15:08:24.933003    3510 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 15:08:24.934372    3510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 15:08:24.936191    3510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 15:08:24.938182    3510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 15:08:24.939838    3510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 15:08:24.941607    3510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 15:08:24.943271    3510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
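
Each -checkend 86400 probe asks one question per certificate: will it still be valid 86400 seconds (24 hours) from now? A zero exit means yes; a non-zero exit would trigger regeneration. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
      && echo "still valid in 24h" \
      || echo "expires within 24h"
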
	I0213 15:08:24.944986    3510 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-809000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50344 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-809000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:08:24.945057    3510 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:08:24.955455    3510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 15:08:24.958295    3510 host.go:66] Checking if "stopped-upgrade-809000" exists ...
	I0213 15:08:24.959334    3510 main.go:141] libmachine: Using SSH client type: external
	I0213 15:08:24.959348    3510 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa (-rw-------)
	I0213 15:08:24.959369    3510 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa -p 50309] /usr/bin/ssh <nil>}
	I0213 15:08:24.959385    3510 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa -p 50309 -f -NTL 50344:localhost:8443
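
Because the qemu2 driver puts the guest behind user-mode NAT, the host reaches it over SSH via a forwarded localhost port (50309 here). The final libmachine invocation opens a background tunnel: -f forks after authentication, -N runs no remote command, -T disables the TTY, and -L 50344:localhost:8443 exposes the guest apiserver on host port 50344. Equivalent standalone command, trimmed to the options that matter:

    /usr/bin/ssh -f -NT -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa \
      -p 50309 docker@localhost -L 50344:localhost:8443
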
	I0213 15:08:25.000785    3510 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 15:08:25.000818    3510 kubeadm.go:636] restartCluster start
	I0213 15:08:25.000897    3510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 15:08:25.004347    3510 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 15:08:25.004696    3510 kubeconfig.go:135] verify returned: extract IP: "stopped-upgrade-809000" does not appear in /Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:08:25.004795    3510 kubeconfig.go:146] "stopped-upgrade-809000" context is missing from /Users/jenkins/minikube-integration/18170-979/kubeconfig - will repair!
	I0213 15:08:25.004993    3510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/kubeconfig: {Name:mkf66d96abab1e512e6f2721c341e70e5b11c9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:08:25.005443    3510 kapi.go:59] client config for stopped-upgrade-809000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101777f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 15:08:25.005909    3510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 15:08:25.008601    3510 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-809000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
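
restartCluster decides whether to reconfigure purely from this diff: diff -u exits non-zero when the freshly rendered kubeadm.yaml.new differs from the file on disk (here the CRI socket gained its unix:// scheme and the cgroup driver changed from systemd to cgroupfs), and only then is the new file copied into place, as on the sudo cp line below. Control-flow sketch of that decision:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      # configs differ: adopt the new config and replay the init phases
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi
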
	I0213 15:08:25.008607    3510 kubeadm.go:1135] stopping kube-system containers ...
	I0213 15:08:25.008643    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:08:25.018984    3510 docker.go:483] Stopping containers: [414a8117b44a ae16ebf684a6 2c330ef72602 c9bca2ddc84e ea48366b9587 1d0476e0f407 ad6284b5b306 30659c73ce71]
	I0213 15:08:25.019049    3510 ssh_runner.go:195] Run: docker stop 414a8117b44a ae16ebf684a6 2c330ef72602 c9bca2ddc84e ea48366b9587 1d0476e0f407 ad6284b5b306 30659c73ce71
	I0213 15:08:25.029947    3510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 15:08:25.035692    3510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:08:25.038381    3510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 15:08:25.038409    3510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 15:08:25.041138    3510 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 15:08:25.041144    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:08:25.063725    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:08:25.346053    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:08:25.494183    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:08:25.524711    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
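
On this restart path minikube does not run a full kubeadm init; it replays individual phases against the same config, in the order shown above. Condensed sketch of the sequence (leaving $phase unquoted is deliberate so that "certs all" splits into two arguments):

    K8S=/var/lib/minikube/binaries/v1.24.1
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$K8S:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
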
	I0213 15:08:25.551786    3510 api_server.go:52] waiting for apiserver process to appear ...
	I0213 15:08:25.551871    3510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:08:26.054145    3510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:08:26.553926    3510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:08:27.053596    3510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:08:27.057644    3510 api_server.go:72] duration metric: took 1.5058935s to wait for apiserver process to appear ...
	I0213 15:08:27.057654    3510 api_server.go:88] waiting for apiserver healthz status ...
	I0213 15:08:27.057663    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
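
From here on, two profiles (PIDs 3510 and 3378) interleave the same wait loop: probe /healthz, time out, probe again. Every probe in this log times out, which is consistent with 10.0.2.15 being the guest's user-mode NAT address and therefore unreachable from the host; that is the proximate failure visible in this trace. A minimal stand-in for the loop (the 5s per-request timeout is an assumption):

    until curl -fsk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
      sleep 0.5   # minikube also re-checks the apiserver process between probes
    done
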
	I0213 15:08:28.070760    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:28.071182    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:28.114382    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:28.114518    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:28.136835    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:28.136954    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:28.152234    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:28.152313    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:28.164691    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:28.164767    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:28.176596    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:28.176658    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:28.190491    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:28.190567    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:28.201225    3378 logs.go:276] 0 containers: []
	W0213 15:08:28.201238    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:28.201294    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:28.212537    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:28.212550    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:28.212556    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:28.226585    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:28.226596    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:28.238311    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:28.238321    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:28.262754    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:28.262765    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:28.274379    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:28.274389    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:28.289285    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:28.289295    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:28.300519    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:28.300530    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:28.317940    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:28.317951    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:28.330801    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:28.330813    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:28.342488    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:28.342499    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:28.347505    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:28.347510    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:28.387069    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:28.387080    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:28.401452    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:28.401465    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:28.418316    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:28.418331    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:28.460802    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:28.460817    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:28.486124    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:28.486145    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:28.501222    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:28.501234    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
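
The container-status command above prefers crictl when it is on PATH and otherwise (or when crictl itself fails) falls back to docker ps -a; the backtick substitution `which crictl || echo crictl` keeps the command line well-formed even when crictl is absent. An explicit equivalent:

    if command -v crictl >/dev/null 2>&1 && sudo crictl ps -a; then
      :   # crictl was present and succeeded
    else
      sudo docker ps -a
    fi
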
	I0213 15:08:31.015327    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:32.059716    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:32.059750    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:36.017514    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:36.017803    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:36.048945    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:36.049063    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:36.075104    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:36.075193    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:36.087432    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:36.087505    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:36.104262    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:36.104334    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:36.115214    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:36.115286    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:36.126026    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:36.126098    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:36.136405    3378 logs.go:276] 0 containers: []
	W0213 15:08:36.136422    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:36.136480    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:36.146921    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:36.146937    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:36.146943    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:36.151491    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:36.151500    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:36.165802    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:36.165811    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:36.181307    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:36.181315    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:36.193127    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:36.193138    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:36.205450    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:36.205460    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:36.219754    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:36.219763    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:36.231438    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:36.231450    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:36.255507    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:36.255517    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:36.266590    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:36.266603    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:36.282782    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:36.282792    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:36.297517    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:36.297528    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:36.321407    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:36.321415    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:36.358972    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:36.358982    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:36.394858    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:36.394872    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:36.407236    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:36.407248    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:36.428561    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:36.428572    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:37.059926    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:37.059994    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:38.942454    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:42.060332    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:42.060369    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:43.944600    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:43.944699    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:43.955826    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:43.955891    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:43.966713    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:43.966779    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:43.977141    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:43.977208    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:43.987849    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:43.987924    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:43.998387    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:43.998450    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:44.009674    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:44.009745    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:44.025067    3378 logs.go:276] 0 containers: []
	W0213 15:08:44.025076    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:44.025134    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:44.040806    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:44.040823    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:44.040829    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:44.079790    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:44.079801    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:44.093730    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:44.093739    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:44.108382    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:44.108392    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:44.119339    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:44.119351    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:44.130665    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:44.130676    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:44.154774    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:44.154782    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:44.172488    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:44.172520    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:44.184489    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:44.184502    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:44.196369    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:44.196379    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:44.213433    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:44.213445    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:44.252110    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:44.252122    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:44.279503    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:44.279515    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:44.297593    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:44.297604    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:44.310509    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:44.310520    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:44.314984    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:44.314991    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:44.336843    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:44.336854    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:46.864615    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:47.060772    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:47.060861    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:51.866517    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:51.866680    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:51.878223    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:51.878292    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:51.892632    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:51.892710    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:51.903376    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:51.903449    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:51.914603    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:51.914672    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:51.925616    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:51.925691    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:51.936497    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:51.936569    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:51.947012    3378 logs.go:276] 0 containers: []
	W0213 15:08:51.947028    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:51.947080    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:51.957096    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:51.957114    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:51.957120    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:51.968406    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:51.968417    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:51.984593    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:51.984607    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:51.997065    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:51.997080    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:52.001748    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:52.001757    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:52.018278    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:08:52.018289    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:08:52.029819    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:52.029829    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:52.060741    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:52.060755    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:52.076369    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:52.076380    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:52.093336    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:52.093347    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:52.104965    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:08:52.104976    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:08:52.116008    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:08:52.116019    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:08:52.138219    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:52.138228    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:08:52.176532    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:08:52.176540    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:08:52.211575    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:08:52.211586    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:08:52.223449    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:08:52.223462    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:08:52.237379    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:52.237390    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:52.061509    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:52.061528    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:54.752398    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:57.062828    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:57.062897    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:59.753262    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:59.753380    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:08:59.764658    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:08:59.764727    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:08:59.775636    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:08:59.775708    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:08:59.786346    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:08:59.786417    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:08:59.796776    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:08:59.796842    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:08:59.806930    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:08:59.806999    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:08:59.817141    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:08:59.817207    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:08:59.827423    3378 logs.go:276] 0 containers: []
	W0213 15:08:59.827435    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:08:59.827488    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:08:59.838695    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:08:59.838709    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:08:59.838715    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:08:59.863295    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:08:59.863306    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:08:59.877679    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:08:59.877695    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:08:59.892462    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:08:59.892477    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:08:59.910986    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:08:59.911008    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:08:59.931133    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:08:59.931145    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:08:59.943229    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:08:59.943239    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:08:59.947554    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:08:59.947561    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:08:59.959071    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:08:59.959082    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:08:59.973990    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:08:59.973998    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:08:59.986013    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:08:59.986027    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:00.024338    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:00.024354    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:00.059238    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:09:00.059253    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:09:00.071198    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:00.071211    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:00.094576    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:09:00.094585    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:09:00.109133    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:09:00.109143    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:09:00.120182    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:09:00.120191    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:09:02.064118    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:02.064148    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:02.633446    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:07.065666    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:07.065733    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:07.635559    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:07.635716    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:09:07.648020    3378 logs.go:276] 2 containers: [208760af4a08 8caf827e4484]
	I0213 15:09:07.648115    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:09:07.659356    3378 logs.go:276] 2 containers: [b8e87f0d0361 e46e2621e8f2]
	I0213 15:09:07.659425    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:09:07.669944    3378 logs.go:276] 1 containers: [bb93bb927f6e]
	I0213 15:09:07.670007    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:09:07.684476    3378 logs.go:276] 2 containers: [28d0ae065fef e2ce89c0b03f]
	I0213 15:09:07.684543    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:09:07.698202    3378 logs.go:276] 1 containers: [0f4e0a6896ae]
	I0213 15:09:07.698266    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:09:07.710336    3378 logs.go:276] 2 containers: [375240b38f86 62fb42b3b949]
	I0213 15:09:07.710400    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:09:07.720383    3378 logs.go:276] 0 containers: []
	W0213 15:09:07.720393    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:09:07.720445    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:09:07.731124    3378 logs.go:276] 2 containers: [ea2e9c4e8484 760aec18fa29]
	I0213 15:09:07.731139    3378 logs.go:123] Gathering logs for storage-provisioner [760aec18fa29] ...
	I0213 15:09:07.731146    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 760aec18fa29"
	I0213 15:09:07.742916    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:09:07.742927    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:07.782720    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:09:07.782734    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:09:07.787284    3378 logs.go:123] Gathering logs for kube-apiserver [8caf827e4484] ...
	I0213 15:09:07.787290    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8caf827e4484"
	I0213 15:09:07.812638    3378 logs.go:123] Gathering logs for etcd [e46e2621e8f2] ...
	I0213 15:09:07.812649    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e46e2621e8f2"
	I0213 15:09:07.827895    3378 logs.go:123] Gathering logs for kube-proxy [0f4e0a6896ae] ...
	I0213 15:09:07.827904    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4e0a6896ae"
	I0213 15:09:07.845438    3378 logs.go:123] Gathering logs for kube-controller-manager [375240b38f86] ...
	I0213 15:09:07.845451    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 375240b38f86"
	I0213 15:09:07.863670    3378 logs.go:123] Gathering logs for etcd [b8e87f0d0361] ...
	I0213 15:09:07.863681    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8e87f0d0361"
	I0213 15:09:07.877523    3378 logs.go:123] Gathering logs for coredns [bb93bb927f6e] ...
	I0213 15:09:07.877540    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb93bb927f6e"
	I0213 15:09:07.889364    3378 logs.go:123] Gathering logs for kube-scheduler [e2ce89c0b03f] ...
	I0213 15:09:07.889376    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ce89c0b03f"
	I0213 15:09:07.904198    3378 logs.go:123] Gathering logs for kube-controller-manager [62fb42b3b949] ...
	I0213 15:09:07.904209    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62fb42b3b949"
	I0213 15:09:07.915859    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:07.915870    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:07.939587    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:07.939595    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:07.974439    3378 logs.go:123] Gathering logs for kube-apiserver [208760af4a08] ...
	I0213 15:09:07.974452    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 208760af4a08"
	I0213 15:09:07.989271    3378 logs.go:123] Gathering logs for kube-scheduler [28d0ae065fef] ...
	I0213 15:09:07.989281    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28d0ae065fef"
	I0213 15:09:08.006213    3378 logs.go:123] Gathering logs for storage-provisioner [ea2e9c4e8484] ...
	I0213 15:09:08.006227    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea2e9c4e8484"
	I0213 15:09:08.018250    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:09:08.018261    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
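The cycle above is minikube enumerating each control-plane component's containers (docker ps -a with a name filter) and then tailing their logs. A minimal Go sketch of that docker-ps pattern, for illustration only; the helper name is hypothetical and this is not minikube's actual logs.go API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the kubelet's k8s_<component> convention, mirroring the
// "docker ps -a --filter=name=k8s_... --format={{.ID}}" calls in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields also drops the trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}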
	I0213 15:09:10.532343    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:12.068425    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:12.068455    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:15.534784    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:15.534935    3378 kubeadm.go:640] restartCluster took 4m4.516227292s
	W0213 15:09:15.535091    3378 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
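The repeated "Checking apiserver healthz ... stopped: ... Client.Timeout exceeded" pairs above are a poll loop: each probe GETs /healthz with a short per-request client timeout and retries until an overall deadline (here about 4m, per restartCluster) expires. A self-contained Go sketch of that pattern, assuming a simple retry loop; this is not minikube's actual api_server.go implementation:

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or ctx expires.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		// Per-request timeout: the source of the
		// "Client.Timeout exceeded while awaiting headers" errors above.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only; real code would verify the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported healthy
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitForHealthz(ctx, "https://10.0.2.15:8443/healthz"))
}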
	I0213 15:09:15.535147    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0213 15:09:16.584137    3378 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.048996583s)
	I0213 15:09:16.584207    3378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:09:16.589044    3378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 15:09:16.591755    3378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:09:16.594724    3378 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 15:09:16.594740    3378 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 15:09:16.612161    3378 kubeadm.go:322] [init] Using Kubernetes version: v1.24.1
	I0213 15:09:16.612218    3378 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 15:09:16.659786    3378 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 15:09:16.659844    3378 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 15:09:16.659916    3378 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 15:09:16.708598    3378 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 15:09:16.712793    3378 out.go:204]   - Generating certificates and keys ...
	I0213 15:09:16.712825    3378 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 15:09:16.712862    3378 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 15:09:16.712900    3378 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 15:09:16.712944    3378 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 15:09:16.712980    3378 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 15:09:16.713016    3378 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 15:09:16.713051    3378 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 15:09:16.713083    3378 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 15:09:16.713126    3378 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 15:09:16.713162    3378 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 15:09:16.713187    3378 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 15:09:16.713225    3378 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 15:09:16.801873    3378 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 15:09:16.917161    3378 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 15:09:17.014021    3378 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 15:09:17.111883    3378 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 15:09:17.142112    3378 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 15:09:17.142484    3378 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 15:09:17.142514    3378 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 15:09:17.228035    3378 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 15:09:17.231323    3378 out.go:204]   - Booting up control plane ...
	I0213 15:09:17.231372    3378 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 15:09:17.231409    3378 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 15:09:17.231591    3378 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 15:09:17.231864    3378 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 15:09:17.232725    3378 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 15:09:17.070563    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:17.070582    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:21.236751    3378 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003751 seconds
	I0213 15:09:21.236821    3378 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 15:09:21.240085    3378 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 15:09:21.758214    3378 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 15:09:21.758471    3378 kubeadm.go:322] [mark-control-plane] Marking the node running-upgrade-781000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 15:09:22.262732    3378 kubeadm.go:322] [bootstrap-token] Using token: cxpo2j.ezru91fdgin60m1s
	I0213 15:09:22.265412    3378 out.go:204]   - Configuring RBAC rules ...
	I0213 15:09:22.265478    3378 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 15:09:22.265574    3378 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 15:09:22.272752    3378 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 15:09:22.273583    3378 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 15:09:22.274674    3378 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 15:09:22.275417    3378 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 15:09:22.278687    3378 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 15:09:22.449872    3378 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 15:09:22.666903    3378 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 15:09:22.667380    3378 kubeadm.go:322] 
	I0213 15:09:22.667406    3378 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 15:09:22.667409    3378 kubeadm.go:322] 
	I0213 15:09:22.667439    3378 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 15:09:22.667441    3378 kubeadm.go:322] 
	I0213 15:09:22.667479    3378 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 15:09:22.667536    3378 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 15:09:22.667560    3378 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 15:09:22.667583    3378 kubeadm.go:322] 
	I0213 15:09:22.667608    3378 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 15:09:22.667610    3378 kubeadm.go:322] 
	I0213 15:09:22.667634    3378 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 15:09:22.667637    3378 kubeadm.go:322] 
	I0213 15:09:22.667660    3378 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 15:09:22.667703    3378 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 15:09:22.667751    3378 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 15:09:22.667754    3378 kubeadm.go:322] 
	I0213 15:09:22.667802    3378 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 15:09:22.667837    3378 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 15:09:22.667840    3378 kubeadm.go:322] 
	I0213 15:09:22.667878    3378 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cxpo2j.ezru91fdgin60m1s \
	I0213 15:09:22.667929    3378 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d \
	I0213 15:09:22.667941    3378 kubeadm.go:322] 	--control-plane 
	I0213 15:09:22.667948    3378 kubeadm.go:322] 
	I0213 15:09:22.667985    3378 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 15:09:22.667988    3378 kubeadm.go:322] 
	I0213 15:09:22.668027    3378 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cxpo2j.ezru91fdgin60m1s \
	I0213 15:09:22.668072    3378 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d 
	I0213 15:09:22.668181    3378 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 15:09:22.668190    3378 cni.go:84] Creating CNI manager for ""
	I0213 15:09:22.668207    3378 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:09:22.671499    3378 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 15:09:22.679447    3378 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 15:09:22.682910    3378 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
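The 457-byte conflist written above is minikube's bridge CNI configuration. Its contents are not shown in the log; purely as an illustration of the shape such a file takes, here is a generic bridge conflist embedded in Go (the subnet, bridge name, and plugin list are assumptions, not minikube's actual file):

package main

import "fmt"

// A generic CNI conflist: a bridge plugin with host-local IPAM, plus
// portmap and loopback. Illustrative only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } },
    { "type": "loopback" }
  ]
}`

func main() { fmt.Println(conflist) }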
	I0213 15:09:22.687874    3378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 15:09:22.687922    3378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 15:09:22.687923    3378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fb52fe04bc8b044b129ef2ff27607d20a9fceb93 minikube.k8s.io/name=running-upgrade-781000 minikube.k8s.io/updated_at=2024_02_13T15_09_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 15:09:22.730911    3378 ops.go:34] apiserver oom_adj: -16
	I0213 15:09:22.730982    3378 kubeadm.go:1088] duration metric: took 43.105584ms to wait for elevateKubeSystemPrivileges.
	I0213 15:09:22.744310    3378 host.go:66] Checking if "running-upgrade-781000" exists ...
	I0213 15:09:22.745368    3378 main.go:141] libmachine: Using SSH client type: external
	I0213 15:09:22.745386    3378 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa (-rw-------)
	I0213 15:09:22.745401    3378 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa -p 50111] /usr/bin/ssh <nil>}
	I0213 15:09:22.745413    3378 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa -p 50111 -f -NTL 50143:localhost:8443
	I0213 15:09:22.787808    3378 kubeadm.go:406] StartCluster complete in 4m11.826482833s
	I0213 15:09:22.787870    3378 settings.go:142] acquiring lock: {Name:mkdd6397441cfaf6d06a74b65d6ddefdb863237c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:09:22.787965    3378 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:09:22.788509    3378 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/kubeconfig: {Name:mkf66d96abab1e512e6f2721c341e70e5b11c9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:09:22.788883    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 15:09:22.788987    3378 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 15:09:22.789033    3378 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-781000"
	I0213 15:09:22.789048    3378 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-781000"
	W0213 15:09:22.789051    3378 addons.go:243] addon storage-provisioner should already be in state true
	I0213 15:09:22.789074    3378 host.go:66] Checking if "running-upgrade-781000" exists ...
	I0213 15:09:22.789072    3378 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-781000"
	I0213 15:09:22.789083    3378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-781000"
	I0213 15:09:22.789167    3378 config.go:182] Loaded profile config "running-upgrade-781000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:09:22.789311    3378 kapi.go:59] client config for running-upgrade-781000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104157f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 15:09:22.790232    3378 kapi.go:59] client config for running-upgrade-781000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/running-upgrade-781000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104157f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 15:09:22.790374    3378 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-781000"
	W0213 15:09:22.790379    3378 addons.go:243] addon default-storageclass should already be in state true
	I0213 15:09:22.790386    3378 host.go:66] Checking if "running-upgrade-781000" exists ...
	I0213 15:09:22.794329    3378 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:09:22.072713    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:22.072757    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:22.798407    3378 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 15:09:22.798414    3378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 15:09:22.798423    3378 sshutil.go:53] new ssh client: &{IP:localhost Port:50111 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa Username:docker}
	I0213 15:09:22.799223    3378 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 15:09:22.799228    3378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 15:09:22.799234    3378 sshutil.go:53] new ssh client: &{IP:localhost Port:50111 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/running-upgrade-781000/id_rsa Username:docker}
	I0213 15:09:22.825713    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           10.0.2.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 15:09:22.838588    3378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 15:09:22.903059    3378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 15:09:23.210970    3378 start.go:929] {"host.minikube.internal": 10.0.2.2} host record injected into CoreDNS's ConfigMap
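The sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (10.0.2.2 on QEMU's user-mode network). A small sketch that reproduces the injected Corefile fragment; the block text mirrors the sed expression shown in the log:

package main

import "fmt"

// hostsBlock returns the CoreDNS `hosts` stanza that the sed command
// inserts before the `forward . /etc/resolv.conf` line of the Corefile.
func hostsBlock(hostIP string) string {
	return fmt.Sprintf(`        hosts {
           %s host.minikube.internal
           fallthrough
        }`, hostIP)
}

func main() {
	fmt.Println(hostsBlock("10.0.2.2"))
}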
	I0213 15:09:27.075076    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:27.075320    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:09:27.104011    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:09:27.104142    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:09:27.120389    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:09:27.120482    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:09:27.133433    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:09:27.133508    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:09:27.144811    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:09:27.144894    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:09:27.155641    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:09:27.155702    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:09:27.166246    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:09:27.166315    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:09:27.176987    3510 logs.go:276] 0 containers: []
	W0213 15:09:27.176998    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:09:27.177053    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:09:27.187520    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:09:27.187537    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:09:27.187543    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:09:27.192534    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:09:27.192541    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:09:27.209616    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:09:27.209625    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:09:27.237800    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:27.237810    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:27.263589    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:09:27.263599    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:09:27.278694    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:09:27.278705    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:09:27.290369    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:09:27.290381    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:09:27.307332    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:09:27.307342    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:09:27.318590    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:09:27.318601    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:09:27.332468    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:27.332478    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:27.458411    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:09:27.458423    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:09:27.469902    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:09:27.469915    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:09:27.482424    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:09:27.482435    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:09:27.501878    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:09:27.501890    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:27.516156    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:09:27.516164    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:09:27.529800    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:09:27.529811    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:09:27.546372    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:09:27.546383    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:09:30.060728    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:35.061607    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:35.061848    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:09:35.078133    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:09:35.078222    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:09:35.092781    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:09:35.092855    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:09:35.104021    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:09:35.104096    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:09:35.115004    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:09:35.115086    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:09:35.125538    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:09:35.125603    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:09:35.136337    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:09:35.136414    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:09:35.158566    3510 logs.go:276] 0 containers: []
	W0213 15:09:35.158578    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:09:35.158643    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:09:35.169052    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:09:35.169065    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:09:35.169072    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:09:35.183718    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:09:35.183732    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:09:35.198066    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:09:35.198079    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:09:35.210177    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:09:35.210189    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:09:35.224146    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:09:35.224159    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:09:35.236082    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:09:35.236092    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:09:35.247181    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:09:35.247192    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:35.261518    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:09:35.261525    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:09:35.276300    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:09:35.276314    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:09:35.288404    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:09:35.288415    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:09:35.307008    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:35.307019    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:35.331728    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:09:35.331737    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:09:35.343790    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:35.343800    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:35.382331    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:09:35.382342    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:09:35.409300    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:09:35.409311    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:09:35.420501    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:09:35.420514    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:09:35.435520    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:09:35.435530    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:09:37.941900    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:42.944106    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:42.944263    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:09:42.965267    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:09:42.965455    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:09:42.978090    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:09:42.978157    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:09:42.988850    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:09:42.988929    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:09:42.999983    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:09:43.000071    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:09:43.011009    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:09:43.011076    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:09:43.021982    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:09:43.022054    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:09:43.032362    3510 logs.go:276] 0 containers: []
	W0213 15:09:43.032382    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:09:43.032444    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:09:43.042914    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:09:43.042936    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:09:43.042941    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:09:43.058350    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:09:43.058360    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:09:43.080004    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:43.080016    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:43.103922    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:09:43.103930    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:09:43.116206    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:43.116217    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:43.155521    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:09:43.155531    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:09:43.175862    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:09:43.175873    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:09:43.200424    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:09:43.200437    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:09:43.212178    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:09:43.212190    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:09:43.226923    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:09:43.226933    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:09:43.239041    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:09:43.239050    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:09:43.252706    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:09:43.252716    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:43.267521    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:09:43.267529    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:09:43.271704    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:09:43.271711    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:09:43.283131    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:09:43.283145    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:09:43.298136    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:09:43.298147    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:09:43.312983    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:09:43.312993    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:09:45.826771    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0213 15:09:52.789561    3378 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "running-upgrade-781000" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	E0213 15:09:52.789578    3378 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	I0213 15:09:52.789592    3378 start.go:223] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:09:52.793206    3378 out.go:177] * Verifying Kubernetes components...
	I0213 15:09:52.796868    3378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:09:52.802510    3378 api_server.go:52] waiting for apiserver process to appear ...
	I0213 15:09:52.802568    3378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:09:52.806777    3378 api_server.go:72] duration metric: took 17.170958ms to wait for apiserver process to appear ...
	I0213 15:09:52.806783    3378 api_server.go:88] waiting for apiserver healthz status ...
	I0213 15:09:52.806790    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0213 15:09:53.224643    3378 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0213 15:09:53.229314    3378 out.go:177] * Enabled addons: storage-provisioner
	I0213 15:09:50.827631    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:50.827811    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:09:50.849586    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:09:50.849701    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:09:50.864505    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:09:50.864585    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:09:50.878516    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:09:50.878580    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:09:50.889532    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:09:50.889597    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:09:50.899834    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:09:50.899905    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:09:50.910665    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:09:50.910753    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:09:50.920216    3510 logs.go:276] 0 containers: []
	W0213 15:09:50.920228    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:09:50.920291    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:09:50.930957    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:09:50.930971    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:09:50.930981    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:50.945391    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:09:50.945398    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:09:50.958905    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:09:50.958915    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:09:50.974427    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:09:50.974438    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:09:50.985796    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:09:50.985807    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:09:50.997466    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:09:50.997477    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:09:51.009148    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:09:51.009158    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:09:51.020454    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:51.020468    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:51.056995    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:09:51.057005    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:09:51.078680    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:09:51.078691    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:09:51.093156    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:09:51.093167    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:09:51.104541    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:51.104553    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:51.129590    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:09:51.129599    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:09:51.140915    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:09:51.140925    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:09:51.145155    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:09:51.145161    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:09:51.170727    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:09:51.170737    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:09:51.185479    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:09:51.185488    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:09:53.237211    3378 addons.go:505] enable addons completed in 30.448888167s: enabled=[storage-provisioner]
	I0213 15:09:53.704488    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:57.808735    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:57.808756    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:58.706666    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:58.706791    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:09:58.721084    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:09:58.721163    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:09:58.732717    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:09:58.732798    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:09:58.743535    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:09:58.743603    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:09:58.754573    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:09:58.754655    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:09:58.764835    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:09:58.764910    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:09:58.775345    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:09:58.775423    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:09:58.785797    3510 logs.go:276] 0 containers: []
	W0213 15:09:58.785808    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:09:58.785865    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:09:58.796630    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:09:58.796644    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:09:58.796650    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:09:58.811028    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:09:58.811039    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:09:58.822552    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:09:58.822568    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:09:58.837626    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:09:58.837638    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:09:58.849178    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:09:58.849189    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:09:58.867735    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:09:58.867749    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:09:58.879882    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:09:58.879893    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:09:58.884303    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:58.884309    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:58.919234    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:09:58.919245    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:58.934178    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:09:58.934186    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:09:58.948254    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:09:58.948264    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:09:58.959463    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:09:58.959472    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:09:58.973137    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:09:58.973147    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:09:58.990538    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:09:58.990548    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:09:59.004364    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:59.004374    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:59.028386    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:09:59.028394    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:09:59.053320    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:09:59.053332    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:01.566891    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:02.808859    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:02.808883    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:06.569080    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:06.569332    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:06.589049    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:06.589119    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:06.600672    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:06.600741    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:06.612999    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:06.613065    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:06.623476    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:06.623540    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:06.634298    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:06.634373    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:06.650220    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:06.650287    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:06.660778    3510 logs.go:276] 0 containers: []
	W0213 15:10:06.660791    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:06.660861    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:06.674322    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:06.674336    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:06.674342    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:06.688365    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:06.688375    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:06.712619    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:06.712632    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:06.724757    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:06.724767    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:06.736580    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:06.736590    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:06.749535    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:06.749546    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:06.764385    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:06.764396    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:06.775798    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:06.775810    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:06.787529    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:06.787539    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:06.802938    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:06.802950    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:06.814209    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:06.814219    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:06.828753    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:06.828760    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:06.832762    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:06.832768    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:06.868297    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:06.868309    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:06.882767    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:06.882776    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:06.907773    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:06.907785    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:06.924872    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:06.924882    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:07.809050    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:07.809086    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:09.441365    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:12.809407    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:12.809433    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:14.443679    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:14.443873    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:14.473948    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:14.474069    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:14.492039    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:14.492137    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:14.517058    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:14.517126    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:14.528008    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:14.528089    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:14.538799    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:14.538864    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:14.549443    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:14.549513    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:14.559455    3510 logs.go:276] 0 containers: []
	W0213 15:10:14.559464    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:14.559515    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:14.569914    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:14.569930    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:14.569939    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:14.583474    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:14.583483    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:14.595991    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:14.596001    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:14.610155    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:14.610166    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:14.628861    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:14.628875    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:14.640968    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:14.640980    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:14.656000    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:14.656007    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:14.681617    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:14.681627    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:14.705024    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:14.705035    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:14.721884    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:14.721895    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:14.733446    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:14.733458    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:14.737471    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:14.737478    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:14.751289    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:14.751297    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:14.775590    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:14.775600    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:14.790182    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:14.790195    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:14.825325    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:14.825338    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:14.838831    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:14.838840    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:17.351278    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:17.809855    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:17.809901    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:22.353618    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:22.353863    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:22.382173    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:22.382301    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:22.399620    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:22.399704    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:22.413905    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:22.413968    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:22.425003    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:22.425079    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:22.435747    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:22.435812    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:22.447114    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:22.447185    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:22.457754    3510 logs.go:276] 0 containers: []
	W0213 15:10:22.457767    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:22.457823    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:22.468175    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:22.468190    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:22.468195    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:22.493568    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:22.493580    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:22.529780    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:22.529792    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:22.556520    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:22.556532    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:22.570575    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:22.570585    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:22.582482    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:22.582493    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:22.594573    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:22.594585    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:22.607259    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:22.607270    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:22.621245    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:22.621256    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:22.636312    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:22.636322    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:22.648087    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:22.648098    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:22.666675    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:22.666687    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:22.679195    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:22.679207    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:22.694251    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:22.694260    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:22.698248    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:22.698255    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:22.716473    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:22.716483    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:22.733653    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:22.733665    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:22.810731    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:22.810747    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:25.247221    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:27.811516    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:27.811569    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:30.249477    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:30.249751    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:30.284340    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:30.284474    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:30.304167    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:30.304272    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:30.319146    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:30.319217    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:30.331057    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:30.331126    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:30.341897    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:30.341966    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:30.351989    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:30.352064    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:30.362580    3510 logs.go:276] 0 containers: []
	W0213 15:10:30.362597    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:30.362654    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:30.372682    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:30.372699    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:30.372705    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:30.390446    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:30.390458    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:30.413672    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:30.413692    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:30.424907    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:30.424917    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:30.436500    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:30.436511    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:30.450164    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:30.450173    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:30.464518    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:30.464527    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:30.500095    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:30.500106    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:30.514938    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:30.514949    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:30.563289    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:30.563301    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:30.575222    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:30.575232    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:30.587824    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:30.587835    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:30.592288    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:30.592294    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:30.618857    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:30.618867    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:30.632715    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:30.632726    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:30.644358    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:30.644371    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:30.656822    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:30.656833    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:32.812726    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:32.812751    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:33.172759    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:38.174845    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:38.175100    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:38.202037    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:38.202149    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:38.224320    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:38.224413    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:38.240649    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:38.240716    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:38.254651    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:38.254722    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:38.265614    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:38.265667    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:38.276250    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:38.276325    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:38.287043    3510 logs.go:276] 0 containers: []
	W0213 15:10:38.287055    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:38.287098    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:38.300419    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:38.300437    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:38.300442    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:38.312712    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:38.312724    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:38.327203    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:38.327214    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:38.338427    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:38.338440    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:38.362839    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:38.362851    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:38.377319    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:38.377332    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:38.381582    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:38.381589    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:38.406304    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:38.406318    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:38.419931    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:38.419944    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:38.433911    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:38.433922    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:38.452030    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:38.452040    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:38.470385    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:38.470398    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:37.814073    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:37.814095    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:38.484649    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:38.484660    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:38.522040    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:38.522051    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:38.537050    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:38.537064    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:38.550555    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:38.550567    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:38.565888    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:38.565902    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:41.080061    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:42.815909    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:42.815930    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:46.082011    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:46.082258    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:46.106531    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:46.106628    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:46.122619    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:46.122714    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:46.135774    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:46.135840    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:46.147738    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:46.147804    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:46.158449    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:46.158516    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:46.168768    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:46.168837    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:46.179440    3510 logs.go:276] 0 containers: []
	W0213 15:10:46.179450    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:46.179502    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:46.194676    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:46.194691    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:46.194697    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:46.209188    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:46.209195    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:46.221093    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:46.221104    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:46.225268    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:46.225274    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:46.242112    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:46.242122    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:46.253582    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:46.253593    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:46.265402    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:46.265414    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:46.279133    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:46.279144    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:46.290379    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:46.290394    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:46.310060    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:46.310071    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:46.332655    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:46.332664    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:46.346017    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:46.346028    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:46.366216    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:46.366228    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:46.400905    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:46.400917    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:46.426109    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:46.426118    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:46.440378    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:46.440389    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:46.455649    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:46.455658    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:47.817986    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:47.818006    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:48.969489    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:52.818894    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:52.819013    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:52.833129    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:10:52.833215    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:52.844557    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:10:52.844640    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:52.855198    3378 logs.go:276] 2 containers: [6e5977a9cc40 d447b53b1dd0]
	I0213 15:10:52.855262    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:52.866218    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:10:52.866279    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:52.876849    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:10:52.876917    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:52.887512    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:10:52.887576    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:52.897822    3378 logs.go:276] 0 containers: []
	W0213 15:10:52.897832    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:52.897884    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:52.907925    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:10:52.907944    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:10:52.907949    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:10:52.925536    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:10:52.925546    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:10:52.942477    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:52.942490    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:52.966424    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:52.966433    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:53.001118    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:53.001128    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:53.038370    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:10:53.038383    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:10:53.050160    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:10:53.050170    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:10:53.065372    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:10:53.065382    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:53.076468    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:10:53.076478    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:10:53.091248    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:10:53.091259    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:10:53.105884    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:53.105894    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:53.110433    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:10:53.110440    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:10:53.122567    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:10:53.122578    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:10:55.636182    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:53.971596    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:53.971742    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:53.985648    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:53.985725    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:53.996813    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:53.996876    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:54.007945    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:54.008003    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:54.018447    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:54.018513    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:54.028819    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:54.028880    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:54.039730    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:54.039797    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:54.050393    3510 logs.go:276] 0 containers: []
	W0213 15:10:54.050402    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:54.050452    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:54.060422    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:54.060437    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:54.060476    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:54.076851    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:54.076863    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:54.088384    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:54.088395    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:54.113386    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:54.113399    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:54.129346    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:54.129358    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:54.140793    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:54.140803    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:54.157625    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:54.157636    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:54.172913    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:54.172925    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:54.177276    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:54.177283    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:54.215038    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:54.215049    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:54.228668    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:54.228678    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:54.252466    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:54.252473    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:54.267531    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:54.267542    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:54.279657    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:54.279668    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:54.293252    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:54.293262    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:54.308987    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:54.308998    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:54.320580    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:54.320591    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:56.834455    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:00.638433    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:00.638656    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:00.655853    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:00.655936    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:00.675718    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:00.675785    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:00.686043    3378 logs.go:276] 2 containers: [6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:00.686114    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:00.724444    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:00.724504    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:00.735094    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:00.735160    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:00.746148    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:00.746210    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:00.757204    3378 logs.go:276] 0 containers: []
	W0213 15:11:00.757216    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:00.757272    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:00.767665    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:00.767680    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:00.767684    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:00.780732    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:00.780743    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:00.793168    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:00.793180    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:00.804458    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:00.804468    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:00.821097    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:00.821113    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:00.860060    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:00.860071    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:00.872327    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:00.872338    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:00.896411    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:00.896422    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:00.900731    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:00.900738    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:00.915519    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:00.915529    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:00.930921    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:00.930930    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:00.948724    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:00.948736    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:00.960672    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:00.960682    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:01.836793    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:01.837018    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:01.862149    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:01.862241    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:01.880672    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:01.880763    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:01.893605    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:01.893674    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:01.904145    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:01.904220    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:01.926183    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:01.926247    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:01.936543    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:01.936610    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:01.950939    3510 logs.go:276] 0 containers: []
	W0213 15:11:01.950951    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:01.951009    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:01.961590    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:01.961605    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:01.961610    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:01.975278    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:01.975289    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:01.987406    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:01.987417    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:01.999388    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:01.999400    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:02.024203    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:02.024213    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:02.037733    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:02.037743    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:02.051679    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:02.051689    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:02.064062    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:02.064074    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:02.068635    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:02.068642    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:02.103488    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:02.103499    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:02.127872    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:02.127879    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:02.143136    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:02.143151    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:02.154777    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:02.154787    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:02.166642    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:02.166653    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:02.181761    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:02.181770    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:02.195289    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:02.195300    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:02.206041    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:02.206052    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:03.497266    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:04.725450    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:08.499474    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:08.499704    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:08.523751    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:08.523871    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:08.542374    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:08.542459    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:08.555562    3378 logs.go:276] 2 containers: [6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:08.555639    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:08.566132    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:08.566198    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:08.576158    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:08.576224    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:08.586482    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:08.586549    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:08.596398    3378 logs.go:276] 0 containers: []
	W0213 15:11:08.596407    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:08.596478    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:08.607598    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:08.607617    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:08.607622    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:08.644238    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:08.644248    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:08.658331    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:08.658341    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:08.670090    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:08.670102    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:08.687454    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:08.687465    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:08.700157    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:08.700169    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:08.704474    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:08.704481    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:08.738625    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:08.738637    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:08.752272    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:08.752283    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:08.763441    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:08.763454    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:08.775896    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:08.775907    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:08.789893    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:08.789904    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:08.805137    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:08.805146    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:11.331444    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:09.727754    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:09.727975    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:09.750519    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:09.750621    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:09.767659    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:09.767733    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:09.782999    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:09.783062    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:09.793656    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:09.793719    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:09.804289    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:09.804356    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:09.814976    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:09.815034    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:09.829486    3510 logs.go:276] 0 containers: []
	W0213 15:11:09.829499    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:09.829557    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:09.840214    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:09.840232    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:09.840237    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:09.857617    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:09.857627    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:09.869161    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:09.869172    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:09.873123    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:09.873129    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:09.886974    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:09.886986    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:09.898330    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:09.898341    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:09.918705    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:09.918715    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:09.930324    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:09.930336    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:09.942694    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:09.942705    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:09.953674    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:09.953686    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:09.990287    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:09.990297    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:10.015635    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:10.015646    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:10.029923    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:10.029934    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:10.045082    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:10.045093    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:10.059701    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:10.059713    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:10.074795    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:10.074803    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:10.097231    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:10.097238    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:12.612913    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:16.333652    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:16.333973    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:16.364431    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:16.364570    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:16.382755    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:16.382849    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:16.396374    3378 logs.go:276] 2 containers: [6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:16.396450    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:16.408271    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:16.408344    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:16.418950    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:16.419028    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:16.429806    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:16.429874    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:16.439829    3378 logs.go:276] 0 containers: []
	W0213 15:11:16.439843    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:16.439901    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:16.450255    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:16.450271    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:16.450276    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:16.485565    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:16.485573    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:16.497435    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:16.497446    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:16.512891    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:16.512901    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:16.518000    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:16.518007    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:16.535167    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:16.535177    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:16.546993    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:16.547005    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:16.561202    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:16.561213    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:16.573347    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:16.573360    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:16.597402    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:16.597412    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:16.609417    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:16.609427    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:16.645196    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:16.645206    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:16.657250    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:16.657260    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:17.615298    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:17.615871    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:17.646828    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:17.646929    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:17.664150    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:17.664243    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:17.677879    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:17.677955    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:17.689792    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:17.689865    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:17.700455    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:17.700527    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:17.711672    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:17.711746    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:17.721862    3510 logs.go:276] 0 containers: []
	W0213 15:11:17.721873    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:17.721935    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:17.732414    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:17.732429    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:17.732435    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:17.752964    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:17.752975    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:17.767864    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:17.767874    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:17.772398    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:17.772408    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:17.789299    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:17.789310    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:17.800588    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:17.800600    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:17.821535    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:17.821548    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:17.833230    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:17.833242    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:17.856962    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:17.856970    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:17.891478    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:17.891492    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:17.916920    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:17.916931    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:17.928812    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:17.928826    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:17.942535    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:17.942547    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:17.954691    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:17.954704    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:17.967723    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:17.967733    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:17.985467    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:17.985477    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:17.996897    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:17.996907    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
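
Each gathering cycle above follows the same shape: enumerate a component's containers by name filter, then tail each container's logs plus a fixed set of host-level sources. A minimal shell sketch of one such pass, for illustration only, reusing commands and container IDs exactly as recorded in this run:

	# enumerate a component's containers (same filter pattern as the log lines above)
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	# tail the last 400 lines of one container found that way
	docker logs --tail 400 bf9867a5c7c2
	# host-level sources gathered in the same pass
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
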
	I0213 15:11:19.184336    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:20.516034    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:24.186539    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:24.186775    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:24.201512    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:24.201599    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:24.213764    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:24.213830    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:24.227944    3378 logs.go:276] 2 containers: [6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:24.228005    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:24.238889    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:24.238966    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:24.249355    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:24.249426    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:24.263853    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:24.263929    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:24.274852    3378 logs.go:276] 0 containers: []
	W0213 15:11:24.274863    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:24.274920    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:24.287126    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:24.287145    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:24.287150    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:24.299087    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:24.299101    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:24.311064    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:24.311074    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:24.347103    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:24.347112    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:24.360781    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:24.360791    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:24.372580    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:24.372592    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:24.386389    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:24.386399    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:24.404557    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:24.404567    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:24.409405    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:24.409414    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:24.424395    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:24.424403    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:24.436248    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:24.436258    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:24.471006    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:24.471023    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:24.489083    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:24.489093    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:27.015802    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:25.518470    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:25.518655    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:25.536995    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:25.537097    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:25.551137    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:25.551212    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:25.563150    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:25.563219    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:25.573511    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:25.573587    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:25.583876    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:25.583943    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:25.594611    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:25.594684    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:25.605951    3510 logs.go:276] 0 containers: []
	W0213 15:11:25.605967    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:25.606027    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:25.621146    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:25.621163    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:25.621169    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:25.632667    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:25.632678    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:25.636940    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:25.636948    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:25.647916    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:25.647929    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:25.659597    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:25.659608    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:25.673334    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:25.673345    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:25.687891    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:25.687901    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:25.703456    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:25.703467    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:25.717674    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:25.717688    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:25.733170    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:25.733181    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:25.744972    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:25.744982    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:25.772484    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:25.772499    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:25.808341    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:25.808354    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:25.834888    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:25.834898    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:25.848489    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:25.848499    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:25.863211    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:25.863221    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:25.875345    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:25.875355    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:28.394824    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:32.018506    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:32.018881    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:32.050314    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:32.050445    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:32.068455    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:32.068556    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:32.082520    3378 logs.go:276] 2 containers: [6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:32.082601    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:32.094316    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:32.094381    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:32.104836    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:32.104904    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:32.115858    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:32.115935    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:32.129694    3378 logs.go:276] 0 containers: []
	W0213 15:11:32.129705    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:32.129764    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:32.140806    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:32.140832    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:32.140837    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:32.152993    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:32.153004    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:32.165025    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:32.165035    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:32.182572    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:32.182584    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:32.197239    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:32.197249    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:32.208314    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:32.208326    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:32.220645    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:32.220654    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:32.256737    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:32.256748    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:32.271820    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:32.271830    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:32.295227    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:32.295236    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:32.299398    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:32.299408    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:32.332618    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:32.332628    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:32.346382    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:32.346392    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
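
The interleaved healthz probes (PIDs 3378 and 3510 are two concurrent processes) each give up after roughly five seconds, as the paired "Checking"/"stopped" timestamps show. An equivalent manual probe, assuming the same guest address and a self-signed apiserver certificate (hence -k):

	# 5-second budget per probe, matching the timeout seen above; -k skips TLS verification
	curl -k --max-time 5 https://10.0.2.15:8443/healthz
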
	I0213 15:11:33.396320    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:33.396482    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:33.408274    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:33.408352    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:33.418928    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:33.419001    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:33.429015    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:33.429082    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:33.441968    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:33.442039    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:33.452616    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:33.452695    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:33.463058    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:33.463125    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:33.473753    3510 logs.go:276] 0 containers: []
	W0213 15:11:33.473762    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:33.473821    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:33.484158    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:33.484173    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:33.484179    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:34.863168    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:33.499448    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:33.499456    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:33.534802    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:33.534813    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:33.546628    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:33.546639    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:33.564656    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:33.564665    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:33.579936    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:33.579947    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:33.592312    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:33.592321    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:33.596771    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:33.596780    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:33.611675    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:33.611687    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:33.629527    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:33.629539    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:33.643286    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:33.643297    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:33.655910    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:33.655921    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:33.668424    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:33.668435    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:33.693769    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:33.693780    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:33.708061    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:33.708071    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:33.719392    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:33.719403    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:33.734899    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:33.734910    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:36.258619    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:39.865315    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:39.865531    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:39.885181    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:39.885283    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:39.899616    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:39.899687    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:39.912118    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:39.912192    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:39.923199    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:39.923272    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:39.933519    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:39.933582    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:39.944104    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:39.944176    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:39.955085    3378 logs.go:276] 0 containers: []
	W0213 15:11:39.955095    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:39.955150    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:39.967536    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:39.967559    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:39.967564    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:40.003692    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:11:40.003703    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:11:40.016042    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:11:40.016052    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:11:40.027312    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:40.027322    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:40.039937    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:40.039948    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:40.056072    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:40.056082    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:40.060893    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:40.060901    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:40.074839    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:40.074852    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:40.086848    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:40.086860    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:40.122804    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:40.122813    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:40.136935    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:40.136947    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:40.148990    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:40.148999    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:40.168001    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:40.168010    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:40.192750    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:40.192758    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:40.204720    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:40.204731    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:41.260816    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:41.260971    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:41.274627    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:41.274716    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:41.286477    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:41.286548    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:41.296999    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:41.297071    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:41.307618    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:41.307692    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:41.317631    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:41.317701    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:41.328089    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:41.328170    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:41.338513    3510 logs.go:276] 0 containers: []
	W0213 15:11:41.338523    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:41.338575    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:41.357857    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:41.357872    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:41.357877    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:41.361797    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:41.361803    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:41.373244    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:41.373254    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:41.388794    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:41.388805    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:41.406608    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:41.406619    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:41.420534    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:41.420544    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:41.444061    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:41.444068    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:41.458950    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:41.458959    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:41.473248    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:41.473257    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:41.487212    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:41.487221    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:41.498997    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:41.499007    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:41.539969    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:41.539980    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:41.567561    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:41.567572    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:41.581797    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:41.581809    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:41.592820    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:41.592831    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:41.605738    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:41.605748    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:41.619217    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:41.619231    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:42.720025    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:44.133371    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:47.722153    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:47.722298    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:47.734561    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:47.734643    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:47.751283    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:47.751364    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:47.761768    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:47.761853    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:47.782066    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:47.782133    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:47.792884    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:47.792955    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:47.803767    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:47.803835    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:47.814476    3378 logs.go:276] 0 containers: []
	W0213 15:11:47.814489    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:47.814546    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:47.833013    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:47.833029    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:47.833036    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:47.837659    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:11:47.837665    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:11:47.849333    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:47.849342    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:47.868041    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:47.868053    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:47.880104    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:47.880115    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:47.917415    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:47.917426    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:47.931317    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:47.931328    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:47.944984    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:47.944996    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:47.956723    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:47.956733    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:47.968763    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:47.968773    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:48.003062    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:11:48.003073    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:11:48.015209    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:48.015222    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:48.027025    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:48.027037    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:48.043166    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:48.043178    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:48.067839    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:48.067847    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:50.584090    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:49.134670    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:49.135082    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:49.178019    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:49.178180    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:49.196024    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:49.196154    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:49.213893    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:49.213975    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:49.227951    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:49.228036    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:49.256058    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:49.256148    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:49.272098    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:49.272191    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:49.290731    3510 logs.go:276] 0 containers: []
	W0213 15:11:49.290747    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:49.290823    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:49.305997    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:49.306017    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:49.306025    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:49.339888    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:49.339900    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:49.358981    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:49.358993    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:49.374299    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:49.374310    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:49.391789    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:49.391800    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:49.407134    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:49.407147    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:49.418989    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:49.419000    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:49.431360    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:49.431373    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:49.446867    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:49.446881    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:49.460786    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:49.460797    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:49.471892    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:49.471904    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:49.485723    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:49.485732    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:49.497007    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:49.497022    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:49.500902    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:49.500910    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:49.526416    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:49.526426    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:49.540227    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:49.540236    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:49.562873    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:49.562880    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
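
The block above is one complete pass of minikube's log-gathering cycle: each control-plane component is located with "docker ps -a --filter=name=k8s_<component>", the matching container logs are tailed, and host-level sources are pulled alongside (the kubelet and docker/cri-docker units via journalctl, a filtered dmesg, "describe nodes" through the bundled v1.24.1 kubectl, and a crictl-or-docker container listing). The "No container was found matching kindnet" warning is expected, since this cluster is not running the kindnet CNI. A minimal shell sketch of the same sweep, run on the guest, with the component list taken from the filters above:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager storage-provisioner; do
	  for id in $(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'); do
	    docker logs --tail 400 "$id"
	  done
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
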
	I0213 15:11:52.077660    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:55.586560    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
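
Two separate minikube processes (pids 3378 and 3510) are interleaved here, each polling the same endpoint, https://10.0.2.15:8443/healthz, and each probe ends in a client timeout rather than a refused connection: the apiserver container exists but never answers, so the gather-and-probe cycle repeats until the restart budget runs out. A rough manual equivalent from inside the guest (certificate verification skipped; whether an unauthenticated probe is accepted depends on the cluster's anonymous-auth/RBAC settings, so treat this purely as a sketch):

	curl -k --max-time 5 https://10.0.2.15:8443/healthz
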
	I0213 15:11:55.587011    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:55.628268    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:11:55.628439    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:55.649896    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:11:55.650000    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:55.664936    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:11:55.665012    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:55.677211    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:11:55.677276    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:55.688422    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:11:55.688506    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:55.700834    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:11:55.700898    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:55.713887    3378 logs.go:276] 0 containers: []
	W0213 15:11:55.713901    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:55.713964    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:55.724173    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:11:55.724189    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:11:55.724194    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:11:55.740668    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:11:55.740679    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:11:55.752212    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:11:55.752222    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:11:55.765480    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:11:55.765491    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:11:55.777790    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:11:55.777800    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:11:55.795587    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:55.795597    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:55.829816    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:11:55.829830    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:11:55.844058    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:55.844077    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:55.868328    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:11:55.868338    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:55.880002    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:55.880012    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:55.915570    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:55.915582    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:55.924270    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:11:55.924281    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:11:55.941555    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:11:55.941567    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:11:55.953792    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:11:55.953804    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:11:55.969477    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:11:55.969488    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:11:57.080135    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:57.080357    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:57.111572    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:57.111698    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:57.129830    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:57.129940    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:57.149729    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:57.149815    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:57.160653    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:57.160725    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:57.171855    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:57.171916    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:57.182353    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:57.182432    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:57.192280    3510 logs.go:276] 0 containers: []
	W0213 15:11:57.192294    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:57.192356    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:57.208356    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:57.208374    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:57.208380    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:57.222766    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:57.222777    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:57.239112    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:57.239123    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:57.261500    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:57.261509    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:57.275373    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:57.275383    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:57.286140    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:57.286150    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:57.297354    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:57.297364    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:57.308762    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:57.308774    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:57.320115    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:57.320125    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:57.324843    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:57.324851    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:57.354877    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:57.354888    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:57.370134    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:57.370144    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:57.392549    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:57.392560    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:57.406367    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:57.406378    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:57.420920    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:57.420927    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:57.455140    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:57.455154    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:57.469466    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:57.469478    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:58.483175    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:59.983635    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:03.485372    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:03.485489    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:03.497366    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:03.497445    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:03.507687    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:03.507750    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:03.518837    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:03.518907    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:03.537390    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:03.537450    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:03.549342    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:03.549414    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:03.560189    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:03.560263    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:03.569858    3378 logs.go:276] 0 containers: []
	W0213 15:12:03.569867    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:03.569921    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:03.580476    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:03.580491    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:03.580496    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:03.615376    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:03.615388    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:03.630842    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:03.630852    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:03.645152    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:03.645161    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:03.661500    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:03.661510    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:03.680690    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:03.680700    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:03.705506    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:03.705517    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:03.717126    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:03.717137    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:03.721435    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:03.721442    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:03.760369    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:03.760380    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:03.772663    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:03.772687    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:03.784866    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:03.784878    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:03.797145    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:03.797156    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:03.814345    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:03.814355    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:03.826441    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:03.826453    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:06.339857    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:04.985852    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:04.986006    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:05.000908    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:12:05.000992    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:05.011875    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:12:05.011942    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:05.022793    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:12:05.022857    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:05.032986    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:12:05.033055    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:05.043293    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:12:05.043359    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:05.054041    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:12:05.054117    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:05.063996    3510 logs.go:276] 0 containers: []
	W0213 15:12:05.064007    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:05.064070    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:05.073931    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:12:05.073946    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:05.073951    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:05.077994    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:12:05.077999    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:12:05.089394    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:12:05.089405    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:12:05.100454    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:12:05.100463    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:12:05.111933    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:12:05.111947    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:05.123582    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:05.123591    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:05.138735    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:12:05.138743    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:12:05.152532    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:12:05.152543    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:12:05.170674    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:12:05.170684    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:12:05.184265    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:05.184274    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:05.219068    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:12:05.219082    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:12:05.245370    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:12:05.245381    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:12:05.259831    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:12:05.259842    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:12:05.271662    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:12:05.271676    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:12:05.288045    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:12:05.288055    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:12:05.299653    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:12:05.299664    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:12:05.315118    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:05.315128    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:07.839600    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:11.341886    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:11.342071    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:11.362108    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:11.362183    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:11.375325    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:11.375400    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:11.386908    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:11.386970    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:11.397592    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:11.397664    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:11.408017    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:11.408089    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:11.419320    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:11.419387    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:11.429721    3378 logs.go:276] 0 containers: []
	W0213 15:12:11.429731    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:11.429789    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:11.441053    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:11.441073    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:11.441079    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:11.455231    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:11.455242    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:11.467396    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:11.467407    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:11.480100    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:11.480111    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:11.491180    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:11.491190    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:11.526647    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:11.526656    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:11.540716    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:11.540726    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:11.572306    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:11.572319    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:11.593977    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:11.593992    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:11.629185    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:11.629196    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:11.647422    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:11.647434    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:11.662017    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:11.662028    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:11.680791    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:11.680801    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:11.693607    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:11.693617    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:11.698354    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:11.698362    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:12.841747    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:12.841889    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:12.854264    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:12:12.854332    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:12.864914    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:12:12.864972    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:12.875450    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:12:12.875524    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:12.885834    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:12:12.885899    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:12.895778    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:12:12.895846    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:12.906274    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:12:12.906351    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:12.915851    3510 logs.go:276] 0 containers: []
	W0213 15:12:12.915862    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:12.915921    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:12.926325    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:12:12.926340    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:12.926345    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:12.930523    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:12:12.930530    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:12:12.941861    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:12:12.941871    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:12:12.956969    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:12:12.956978    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:12:12.968358    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:12:12.968369    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:12:12.982240    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:12:12.982250    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:12:13.007455    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:12:13.007465    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:12:13.021580    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:12:13.021590    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:12:13.036066    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:12:13.036076    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:12:13.048072    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:12:13.048083    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:12:13.063590    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:13.063599    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:13.078363    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:13.078369    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:13.112519    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:12:13.112529    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:12:13.124299    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:12:13.124309    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:12:13.141634    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:12:13.141645    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:12:13.153932    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:12:13.153942    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:13.166132    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:13.166143    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:14.226086    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:15.688752    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:19.227478    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:19.227686    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:19.249164    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:19.249259    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:19.263541    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:19.263622    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:19.276839    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:19.276905    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:19.287909    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:19.287980    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:19.298278    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:19.298347    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:19.309276    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:19.309345    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:19.318716    3378 logs.go:276] 0 containers: []
	W0213 15:12:19.318727    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:19.318783    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:19.336451    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:19.336467    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:19.336472    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:19.350328    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:19.350339    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:19.362856    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:19.362868    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:19.381161    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:19.381172    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:19.395832    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:19.395842    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:19.407804    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:19.407815    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:19.420566    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:19.420576    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:19.432410    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:19.432420    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:19.448537    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:19.448547    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:19.472394    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:19.472401    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:19.476742    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:19.476752    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:19.489600    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:19.489610    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:19.501614    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:19.501627    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:19.538554    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:19.538568    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:19.574154    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:19.574168    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:22.087944    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:20.690898    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:20.691072    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:20.704729    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:12:20.704812    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:20.715749    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:12:20.715824    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:20.726203    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:12:20.726278    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:20.736202    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:12:20.736274    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:20.747050    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:12:20.747122    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:20.757702    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:12:20.757771    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:20.768207    3510 logs.go:276] 0 containers: []
	W0213 15:12:20.768217    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:20.768270    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:20.778618    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:12:20.778633    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:12:20.778638    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:12:20.793246    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:12:20.793258    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:12:20.807376    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:20.807387    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:20.823071    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:12:20.823080    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:12:20.837935    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:12:20.837946    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:12:20.849666    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:12:20.849677    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:12:20.861055    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:12:20.861066    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:12:20.878386    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:20.878396    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:20.913335    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:12:20.913346    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:12:20.924286    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:12:20.924295    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:12:20.935102    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:12:20.935111    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:12:20.946825    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:12:20.946836    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:20.959354    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:12:20.959366    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:12:20.988922    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:12:20.988936    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:12:21.003839    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:21.003850    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:21.025893    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:21.025900    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:21.030274    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:12:21.030284    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:12:27.090297    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:27.090756    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:27.138646    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:27.138774    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:27.157945    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:27.158048    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:27.172727    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:27.172812    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:27.184690    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:27.184754    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:27.194937    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:27.195009    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:27.206184    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:27.206257    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:27.216782    3378 logs.go:276] 0 containers: []
	W0213 15:12:27.216792    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:27.216863    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:27.231903    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:27.231921    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:27.231926    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:27.249725    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:27.249734    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:27.274590    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:27.274598    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:27.310037    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:27.310050    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:27.322523    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:27.322533    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:27.337166    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:27.337178    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:27.349472    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:27.349486    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:27.361908    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:27.361920    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:27.377680    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:27.377690    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:27.389569    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:27.389579    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:27.425670    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:27.425678    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:27.440241    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:27.440250    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:27.452356    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:27.452366    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:23.545112    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:28.547322    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:28.547402    3510 kubeadm.go:640] restartCluster took 4m3.551788833s
	W0213 15:12:28.547470    3510 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	I0213 15:12:28.547497    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0213 15:12:29.575667    3510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.028179833s)
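
After roughly four minutes of failed healthz probes, restartCluster gives up and minikube falls back to tearing the control plane down and re-initializing it, exactly as the warning above says. The teardown is a stock kubeadm reset pointed at the cri-dockerd socket; condensed from the Run line above:

	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	  kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
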
	I0213 15:12:29.576601    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:12:29.581405    3510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 15:12:29.584097    3510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:12:29.586935    3510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
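
The failed ls is the expected aftermath of the reset: kubeadm reset removes the kubeconfig files under /etc/kubernetes, so the stale-config check exits with status 2 and minikube skips that cleanup step rather than treating it as fatal. A quick way to see the same status by hand (GNU ls exits 2 when a named file cannot be accessed):

	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf; echo "exit: $?"
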
	I0213 15:12:29.586947    3510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
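
The re-init reuses the kubeadm.yaml staged earlier and waives exactly the preflight checks that would object to leftovers of the previous cluster (existing static-pod manifests, the populated /var/lib/minikube directories, the already-bound kubelet port 10250) plus the usual swap/CPU/memory checks. The same command, rewrapped for readability with the flags verbatim from the Run line above:

	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
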
	I0213 15:12:29.604915    3510 kubeadm.go:322] [init] Using Kubernetes version: v1.24.1
	I0213 15:12:29.604943    3510 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 15:12:29.660556    3510 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 15:12:29.660646    3510 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 15:12:29.660698    3510 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 15:12:29.710805    3510 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 15:12:29.716021    3510 out.go:204]   - Generating certificates and keys ...
	I0213 15:12:29.716057    3510 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 15:12:29.716088    3510 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 15:12:29.716142    3510 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 15:12:29.716177    3510 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 15:12:29.716227    3510 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 15:12:29.716261    3510 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 15:12:29.716291    3510 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 15:12:29.716318    3510 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 15:12:29.716399    3510 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 15:12:29.716431    3510 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 15:12:29.716450    3510 kubeadm.go:322] [certs] Using the existing "sa" key
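
Every entry in the [certs] phase reports "Using existing": the reset above cleans kubeadm's default pki directory, but this cluster keeps its certificates in minikube's own certificateDir, /var/lib/minikube/certs (named at the top of the phase), so the whole CA hierarchy and the "sa" key survive and only the kubeconfig files need regenerating. A quick confirmation on the guest (sketch):

	sudo ls /var/lib/minikube/certs
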
	I0213 15:12:29.716487    3510 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 15:12:30.006392    3510 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 15:12:30.108294    3510 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 15:12:30.292606    3510 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 15:12:30.376294    3510 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 15:12:30.406590    3510 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 15:12:30.406912    3510 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 15:12:30.406950    3510 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 15:12:30.474911    3510 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 15:12:27.463613    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:27.463622    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:27.468045    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:27.468054    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:29.986305    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:30.478845    3510 out.go:204]   - Booting up control plane ...
	I0213 15:12:30.478885    3510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 15:12:30.478918    3510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 15:12:30.478954    3510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 15:12:30.478990    3510 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 15:12:30.479057    3510 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 15:12:34.976958    3510 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.501476 seconds
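
The contrast with the failed restart path is the notable part: the old apiserver never reported healthy in over four minutes of probing, while the freshly reset control plane comes up healthy in about 4.5 seconds. Once admin.conf exists again, the probe minikube was attempting can be reproduced by hand with the bundled kubectl (sketch):

	sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
	  --kubeconfig=/etc/kubernetes/admin.conf get --raw=/healthz
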
	I0213 15:12:34.977055    3510 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 15:12:34.984591    3510 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 15:12:35.493260    3510 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 15:12:35.493368    3510 kubeadm.go:322] [mark-control-plane] Marking the node stopped-upgrade-809000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 15:12:35.997459    3510 kubeadm.go:322] [bootstrap-token] Using token: moy1oe.z4h4igdjgcnkbsan
	I0213 15:12:36.003757    3510 out.go:204]   - Configuring RBAC rules ...
	I0213 15:12:36.003813    3510 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 15:12:36.003863    3510 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 15:12:36.007487    3510 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 15:12:36.008186    3510 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 15:12:36.009125    3510 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 15:12:36.010057    3510 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 15:12:36.013355    3510 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 15:12:36.166082    3510 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 15:12:36.401294    3510 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
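	The bootstrap token minted above (moy1oe.z4h4igdjgcnkbsan) is the same one reused in the join commands that follow. If it expires before a node joins, standard kubeadm commands can list or re-mint it; a sketch, assuming root on the control-plane node:

	    # show existing bootstrap tokens and their TTLs
	    kubeadm token list
	    # mint a fresh token and print a ready-to-run join command
	    kubeadm token create --print-join-command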
	I0213 15:12:36.401834    3510 kubeadm.go:322] 
	I0213 15:12:36.401867    3510 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 15:12:36.401870    3510 kubeadm.go:322] 
	I0213 15:12:36.401909    3510 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 15:12:36.401915    3510 kubeadm.go:322] 
	I0213 15:12:36.401926    3510 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 15:12:36.401964    3510 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 15:12:36.401997    3510 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 15:12:36.402000    3510 kubeadm.go:322] 
	I0213 15:12:36.402030    3510 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 15:12:36.402034    3510 kubeadm.go:322] 
	I0213 15:12:36.402065    3510 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 15:12:36.402071    3510 kubeadm.go:322] 
	I0213 15:12:36.402098    3510 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 15:12:36.402138    3510 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 15:12:36.402175    3510 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 15:12:36.402179    3510 kubeadm.go:322] 
	I0213 15:12:36.402222    3510 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 15:12:36.402271    3510 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 15:12:36.402275    3510 kubeadm.go:322] 
	I0213 15:12:36.402321    3510 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token moy1oe.z4h4igdjgcnkbsan \
	I0213 15:12:36.402374    3510 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d \
	I0213 15:12:36.402387    3510 kubeadm.go:322] 	--control-plane 
	I0213 15:12:36.402391    3510 kubeadm.go:322] 
	I0213 15:12:36.402442    3510 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 15:12:36.402446    3510 kubeadm.go:322] 
	I0213 15:12:36.402491    3510 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token moy1oe.z4h4igdjgcnkbsan \
	I0213 15:12:36.402545    3510 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d 
	I0213 15:12:36.402739    3510 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 15:12:36.402799    3510 cni.go:84] Creating CNI manager for ""
	I0213 15:12:36.402815    3510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:12:36.405495    3510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 15:12:36.413609    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 15:12:36.416505    3510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
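	The log only records that a 457-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist, not its contents. As an illustration only, a hypothetical minimal bridge conflist of the kind this bridge-CNI path writes (field values are assumptions, not taken from this run):

	    # hypothetical bridge CNI config; subnet and plugin options are illustrative
	    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF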
	I0213 15:12:36.421473    3510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 15:12:36.421514    3510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 15:12:36.421522    3510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fb52fe04bc8b044b129ef2ff27607d20a9fceb93 minikube.k8s.io/name=stopped-upgrade-809000 minikube.k8s.io/updated_at=2024_02_13T15_12_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 15:12:36.465318    3510 kubeadm.go:1088] duration metric: took 43.839584ms to wait for elevateKubeSystemPrivileges.
	I0213 15:12:36.465352    3510 ops.go:34] apiserver oom_adj: -16
	I0213 15:12:36.465366    3510 host.go:66] Checking if "stopped-upgrade-809000" exists ...
	I0213 15:12:36.466444    3510 main.go:141] libmachine: Using SSH client type: external
	I0213 15:12:36.466462    3510 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa (-rw-------)
	I0213 15:12:36.466478    3510 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa -p 50309] /usr/bin/ssh <nil>}
	I0213 15:12:36.466490    3510 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa -p 50309 -f -NTL 50344:localhost:8443
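	The ssh invocation above uses -N (no remote command), -T (no tty) and -L 50344:localhost:8443, i.e. a plain local port-forward from the host's port 50344 to the guest apiserver. A quick hedged smoke test of the tunnel from the host (the port comes from the log line above; /healthz may still require credentials depending on the apiserver's anonymous-auth settings):

	    # -k because the serving cert is issued for the cluster, not localhost
	    curl -k --max-time 5 https://localhost:50344/healthz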
	I0213 15:12:36.507538    3510 kubeadm.go:406] StartCluster complete in 4m11.567802208s
	I0213 15:12:36.507598    3510 settings.go:142] acquiring lock: {Name:mkdd6397441cfaf6d06a74b65d6ddefdb863237c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:12:36.507870    3510 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:12:36.508555    3510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/kubeconfig: {Name:mkf66d96abab1e512e6f2721c341e70e5b11c9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:12:36.508905    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 15:12:36.508979    3510 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 15:12:36.509041    3510 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-809000"
	I0213 15:12:36.509055    3510 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-809000"
	W0213 15:12:36.509058    3510 addons.go:243] addon storage-provisioner should already be in state true
	I0213 15:12:36.509080    3510 host.go:66] Checking if "stopped-upgrade-809000" exists ...
	I0213 15:12:36.509093    3510 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:12:36.509076    3510 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-809000"
	I0213 15:12:36.509159    3510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-809000"
	I0213 15:12:36.509276    3510 kapi.go:59] client config for stopped-upgrade-809000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101777f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 15:12:36.513601    3510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:12:34.988463    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
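	The 3378 process is stuck in a poll loop: try /healthz with roughly a five-second client timeout, gather component logs, retry. A hedged shell equivalent of that loop (the exact retry cadence in api_server.go is an assumption):

	    # retry until the apiserver answers "ok" on /healthz; loops forever otherwise,
	    # mirroring the repeated "Checking apiserver healthz" attempts in this log
	    until curl -ks --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
	      sleep 2
	    done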
	I0213 15:12:34.988576    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:35.007410    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:35.007486    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:35.017695    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:35.017768    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:35.028743    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:35.028815    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:35.039418    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:35.039488    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:35.049471    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:35.049549    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:35.060205    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:35.060271    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:35.074704    3378 logs.go:276] 0 containers: []
	W0213 15:12:35.074715    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:35.074775    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:35.085237    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:35.085251    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:35.085257    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:35.097175    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:35.097186    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:35.112719    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:35.112728    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:35.124773    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:35.124783    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:35.145296    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:35.145309    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:35.181559    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:35.181568    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:35.186214    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:35.186220    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:35.219467    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:35.219481    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:35.237875    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:35.237886    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:35.249497    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:35.249508    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:35.260886    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:35.260898    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:35.273007    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:35.273019    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:35.296656    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:35.296663    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:35.314063    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:35.314073    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:35.325725    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:35.325734    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
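	Each gathering pass above enumerates the k8s_* containers with docker ps filters and tails 400 lines from each. Done by hand, the same sweep is one loop (a sketch using only the commands already visible in this log):

	    # tail the last 400 log lines of every kube component container
	    for id in $(docker ps -a --filter name=k8s_ --format '{{.ID}}'); do
	      docker logs --tail 400 "$id"
	    done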
	I0213 15:12:36.516543    3510 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 15:12:36.516549    3510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 15:12:36.516557    3510 sshutil.go:53] new ssh client: &{IP:localhost Port:50309 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa Username:docker}
	I0213 15:12:36.517782    3510 kapi.go:59] client config for stopped-upgrade-809000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101777f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 15:12:36.517908    3510 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-809000"
	W0213 15:12:36.517914    3510 addons.go:243] addon default-storageclass should already be in state true
	I0213 15:12:36.517925    3510 host.go:66] Checking if "stopped-upgrade-809000" exists ...
	I0213 15:12:36.518757    3510 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 15:12:36.518762    3510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 15:12:36.518769    3510 sshutil.go:53] new ssh client: &{IP:localhost Port:50309 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa Username:docker}
	I0213 15:12:36.548394    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           10.0.2.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 15:12:36.563995    3510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 15:12:36.570596    3510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 15:12:37.010819    3510 start.go:929] {"host.minikube.internal": 10.0.2.2} host record injected into CoreDNS's ConfigMap
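	The sed pipeline at 15:12:36 splices a hosts block (10.0.2.2 host.minikube.internal, with fallthrough) ahead of the forward directive in the CoreDNS Corefile, which is what the "host record injected" line confirms. A hedged way to verify the result once the apiserver is reachable:

	    # confirm the injected hosts block landed in the CoreDNS config
	    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'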
	I0213 15:12:37.839325    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:42.841520    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:42.841712    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:42.859271    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:42.859359    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:42.872608    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:42.872689    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:42.884732    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:42.884806    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:42.895645    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:42.895714    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:42.906335    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:42.906405    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:42.916672    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:42.916739    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:42.926935    3378 logs.go:276] 0 containers: []
	W0213 15:12:42.926947    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:42.927002    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:42.937434    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:42.937452    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:42.937458    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:42.960873    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:42.960881    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:42.965099    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:42.965108    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:42.979338    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:42.979353    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:43.005441    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:43.005450    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:43.017144    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:43.017155    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:43.052203    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:43.052213    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:43.065276    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:43.065286    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:43.077583    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:43.077593    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:43.089373    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:43.089382    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:43.104132    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:43.104142    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:43.120721    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:43.120733    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:43.133075    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:43.133086    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:43.169621    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:43.169630    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:43.188583    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:43.188593    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:45.702497    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:50.703533    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:50.703652    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:50.716543    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:50.716618    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:50.731599    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:50.731667    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:50.742636    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:50.742710    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:50.753570    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:50.753663    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:50.764033    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:50.764104    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:50.775841    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:50.775912    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:50.787232    3378 logs.go:276] 0 containers: []
	W0213 15:12:50.787243    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:50.787307    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:50.806549    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:50.806565    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:50.806571    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:50.811581    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:50.811591    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:50.826121    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:50.826136    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:50.838407    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:50.838419    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:50.865382    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:50.865399    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:50.903358    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:50.903380    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:50.917183    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:50.917195    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:50.930636    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:50.930647    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:50.943687    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:50.943705    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:50.962218    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:50.962235    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:50.975090    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:50.975103    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:12:50.990106    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:50.990115    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:51.002800    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:51.002809    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:51.039663    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:51.039674    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:51.054376    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:51.054387    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:53.567977    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:58.570181    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:58.570425    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:58.589220    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:12:58.589315    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:58.603977    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:12:58.604044    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:58.615973    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:12:58.616069    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:58.626704    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:12:58.626777    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:58.638262    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:12:58.638345    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:58.648657    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:12:58.648715    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:58.659162    3378 logs.go:276] 0 containers: []
	W0213 15:12:58.659175    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:58.659231    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:58.669558    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:12:58.669575    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:58.669580    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:58.693209    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:58.693219    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:58.727554    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:12:58.727564    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:12:58.739435    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:12:58.739444    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:12:58.751539    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:12:58.751549    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:12:58.771031    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:12:58.771045    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:12:58.785245    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:12:58.785256    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:12:58.797404    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:12:58.797414    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:12:58.810125    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:12:58.810138    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:58.822189    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:58.822202    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:58.826513    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:12:58.826522    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:12:58.838265    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:58.838276    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:58.873392    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:12:58.873402    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:12:58.887718    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:12:58.887727    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:12:58.899937    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:12:58.899946    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:01.418225    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0213 15:13:06.511172    3510 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "stopped-upgrade-809000" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	E0213 15:13:06.511184    3510 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
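	The scale-down that failed here is minikube trimming CoreDNS to a single replica on a single-node cluster; the request itself timed out against 10.0.2.15:8443. The manual equivalent of the failed operation, as a sketch for when the apiserver is reachable:

	    # what start.go was attempting: one coredns replica on a one-node cluster
	    kubectl -n kube-system scale deployment coredns --replicas=1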
	I0213 15:13:06.511194    3510 start.go:223] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:13:06.515885    3510 out.go:177] * Verifying Kubernetes components...
	I0213 15:13:06.519871    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:13:06.525254    3510 api_server.go:52] waiting for apiserver process to appear ...
	I0213 15:13:06.525329    3510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:13:06.530202    3510 api_server.go:72] duration metric: took 18.994ms to wait for apiserver process to appear ...
	I0213 15:13:06.530214    3510 api_server.go:88] waiting for apiserver healthz status ...
	I0213 15:13:06.530223    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0213 15:13:07.017037    3510 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0213 15:13:07.021208    3510 out.go:177] * Enabled addons: storage-provisioner
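	Only storage-provisioner made it; default-storageclass failed above because listing StorageClasses timed out. Per the error text, the addon tries to mark the "standard" class as default, which can be checked once the apiserver responds (hedged, since in this run every such call timed out):

	    # inspect the class the addon tried to mark as default
	    kubectl get storageclass standard -o yaml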
	I0213 15:13:06.418735    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:06.418910    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:06.430342    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:13:06.430417    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:06.441045    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:13:06.441127    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:06.452358    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:13:06.452432    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:06.463456    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:13:06.463520    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:06.476327    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:13:06.476407    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:06.493715    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:13:06.493786    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:06.504021    3378 logs.go:276] 0 containers: []
	W0213 15:13:06.504033    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:06.504093    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:13:06.515046    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:13:06.515060    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:13:06.515066    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:13:06.527956    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:13:06.527966    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:13:06.541707    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:13:06.541718    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:13:06.553635    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:06.553649    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:06.558056    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:06.558066    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:13:06.591561    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:13:06.591575    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:13:06.606318    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:13:06.606331    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:13:06.618650    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:13:06.618661    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:13:06.636659    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:13:06.636669    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:13:06.647622    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:13:06.647633    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:13:06.662254    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:13:06.662265    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:13:06.674177    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:13:06.674187    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:06.690303    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:06.690313    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:06.731644    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:13:06.731656    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:06.748250    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:06.748261    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:07.029125    3510 addons.go:505] enable addons completed in 30.520804958s: enabled=[storage-provisioner]
	I0213 15:13:09.273202    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:11.532226    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:11.532244    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:14.275483    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:14.275738    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:14.302175    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:13:14.302303    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:14.320298    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:13:14.320372    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:14.333579    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:13:14.333653    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:14.344625    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:13:14.344694    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:14.354942    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:13:14.355005    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:14.365551    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:13:14.365615    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:14.375792    3378 logs.go:276] 0 containers: []
	W0213 15:13:14.375800    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:14.375865    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:13:14.386374    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:13:14.386389    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:13:14.386395    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:14.402129    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:13:14.402141    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:13:14.417398    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:13:14.417412    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:13:14.435439    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:13:14.435458    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:14.448726    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:14.448740    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:14.486355    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:13:14.486371    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:13:14.501592    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:13:14.501605    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:13:14.513890    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:14.513903    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:14.518234    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:13:14.518242    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:13:14.531879    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:14.531889    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:14.557492    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:13:14.557506    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:13:14.569901    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:13:14.569911    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:13:14.581462    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:13:14.581475    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:13:14.593643    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:14.593656    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:13:14.633412    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:13:14.633427    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:13:17.149448    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:16.532379    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:16.532403    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:22.151684    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:22.151902    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:22.179725    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:13:22.179811    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:22.192868    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:13:22.192944    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:22.204501    3378 logs.go:276] 4 containers: [8ae24fa2b68a 5966c105587d 6e5977a9cc40 d447b53b1dd0]
	I0213 15:13:22.204578    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:22.215673    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:13:22.215745    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:22.226655    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:13:22.226721    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:22.237369    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:13:22.237430    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:22.248250    3378 logs.go:276] 0 containers: []
	W0213 15:13:22.248260    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:22.248318    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:13:22.258678    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:13:22.258695    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:13:22.258702    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:13:22.270610    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:13:22.270623    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:13:22.282629    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:22.282640    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:22.305141    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:13:22.305150    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:22.316511    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:13:22.316522    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:13:22.330735    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:13:22.330748    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:13:22.344599    3378 logs.go:123] Gathering logs for coredns [d447b53b1dd0] ...
	I0213 15:13:22.344610    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d447b53b1dd0"
	I0213 15:13:22.360044    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:13:22.360055    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:22.375635    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:22.375644    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:22.411982    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:13:22.411992    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:13:22.424336    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:13:22.424347    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:13:22.442841    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:22.442856    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:22.452457    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:22.452467    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:13:21.532561    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:21.532591    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:22.487710    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:13:22.487721    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:13:22.500102    3378 logs.go:123] Gathering logs for coredns [6e5977a9cc40] ...
	I0213 15:13:22.500113    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e5977a9cc40"
	I0213 15:13:25.019842    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:26.532838    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:26.532861    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:30.021971    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:30.022190    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:30.043066    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:13:30.043168    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:30.059070    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:13:30.059139    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:30.072248    3378 logs.go:276] 4 containers: [70cce5993e6c 61a7fa439749 8ae24fa2b68a 5966c105587d]
	I0213 15:13:30.072314    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:30.083956    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:13:30.084027    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:30.103318    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:13:30.103386    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:30.113485    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:13:30.113554    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:30.123616    3378 logs.go:276] 0 containers: []
	W0213 15:13:30.123626    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:30.123683    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:13:30.134026    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:13:30.134042    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:13:30.134047    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:13:30.146801    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:13:30.146811    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:30.158551    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:30.158562    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:13:30.192251    3378 logs.go:123] Gathering logs for coredns [70cce5993e6c] ...
	I0213 15:13:30.192264    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70cce5993e6c"
	I0213 15:13:30.203242    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:13:30.203253    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:30.221105    3378 logs.go:123] Gathering logs for coredns [61a7fa439749] ...
	I0213 15:13:30.221116    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a7fa439749"
	I0213 15:13:30.232531    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:30.232541    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:30.270277    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:13:30.270291    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:13:30.284977    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:13:30.284989    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:13:30.299048    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:13:30.299058    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:13:30.310668    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:13:30.310679    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:13:30.331802    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:30.331812    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:30.353903    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:30.353909    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:30.358507    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:13:30.358513    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:13:30.370140    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:13:30.370150    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
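
A note on the pattern above: each pass of minikube's log-gathering loop re-enumerates every control-plane container with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}", then tails the last 400 lines of each match with "docker logs --tail 400 <id>", alongside journalctl for the kubelet and Docker units and a kubectl "describe nodes". A minimal Go sketch of that enumerate-then-tail shape follows; the structure is illustrative only, not minikube's actual source (the component names are taken from the log).

    // Hypothetical sketch of the enumerate-then-tail pattern visible in the
    // ssh_runner lines above. Illustrative, not minikube's implementation.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            // docker ps -a --filter=name=k8s_<component> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
            for _, id := range ids {
                // docker logs --tail 400 <id>
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
            }
        }
    }
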
	I0213 15:13:31.533251    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:31.533294    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:32.883670    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:36.533585    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:36.533611    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:37.885876    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:37.886109    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:37.905772    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:13:37.905871    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:37.919752    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:13:37.919830    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:37.933863    3378 logs.go:276] 4 containers: [70cce5993e6c 61a7fa439749 8ae24fa2b68a 5966c105587d]
	I0213 15:13:37.933936    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:37.944030    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:13:37.944094    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:37.955655    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:13:37.955716    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:37.965947    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:13:37.966022    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:37.976376    3378 logs.go:276] 0 containers: []
	W0213 15:13:37.976388    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:37.976440    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:13:37.986734    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:13:37.986748    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:37.986754    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:13:38.021307    3378 logs.go:123] Gathering logs for coredns [61a7fa439749] ...
	I0213 15:13:38.021321    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a7fa439749"
	I0213 15:13:38.033436    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:13:38.033447    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:13:38.045827    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:13:38.045838    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:13:38.058621    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:13:38.058635    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:13:38.070537    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:38.070547    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:38.106192    3378 logs.go:123] Gathering logs for coredns [70cce5993e6c] ...
	I0213 15:13:38.106209    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70cce5993e6c"
	I0213 15:13:38.117910    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:13:38.117921    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:13:38.133718    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:13:38.133735    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:38.145398    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:38.145408    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:38.150244    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:13:38.150253    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:13:38.168175    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:13:38.168185    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:13:38.182295    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:13:38.182305    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:38.197564    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:13:38.197577    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:13:38.215187    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:38.215197    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:40.739572    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:41.534232    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:41.534279    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:45.741901    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:45.742128    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:13:45.763705    3378 logs.go:276] 1 containers: [f2de7ba62e3a]
	I0213 15:13:45.763800    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:13:45.778449    3378 logs.go:276] 1 containers: [1fa40f5f6ed4]
	I0213 15:13:45.778518    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:13:45.790972    3378 logs.go:276] 4 containers: [70cce5993e6c 61a7fa439749 8ae24fa2b68a 5966c105587d]
	I0213 15:13:45.791043    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:13:45.802065    3378 logs.go:276] 1 containers: [01510d85af63]
	I0213 15:13:45.802128    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:13:45.812915    3378 logs.go:276] 1 containers: [06765b9e6365]
	I0213 15:13:45.812979    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:13:45.823174    3378 logs.go:276] 1 containers: [2434d0c241f5]
	I0213 15:13:45.823236    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:13:45.833385    3378 logs.go:276] 0 containers: []
	W0213 15:13:45.833393    3378 logs.go:278] No container was found matching "kindnet"
	I0213 15:13:45.833446    3378 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:13:45.844356    3378 logs.go:276] 1 containers: [a609dae9ec7d]
	I0213 15:13:45.844373    3378 logs.go:123] Gathering logs for etcd [1fa40f5f6ed4] ...
	I0213 15:13:45.844378    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fa40f5f6ed4"
	I0213 15:13:45.857946    3378 logs.go:123] Gathering logs for kube-controller-manager [2434d0c241f5] ...
	I0213 15:13:45.857956    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2434d0c241f5"
	I0213 15:13:45.876155    3378 logs.go:123] Gathering logs for kubelet ...
	I0213 15:13:45.876166    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:13:45.911840    3378 logs.go:123] Gathering logs for dmesg ...
	I0213 15:13:45.911847    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:13:45.916097    3378 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:13:45.916103    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:13:45.950594    3378 logs.go:123] Gathering logs for coredns [70cce5993e6c] ...
	I0213 15:13:45.950604    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70cce5993e6c"
	I0213 15:13:45.963358    3378 logs.go:123] Gathering logs for coredns [5966c105587d] ...
	I0213 15:13:45.963369    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5966c105587d"
	I0213 15:13:45.975879    3378 logs.go:123] Gathering logs for kube-proxy [06765b9e6365] ...
	I0213 15:13:45.975894    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06765b9e6365"
	I0213 15:13:45.987407    3378 logs.go:123] Gathering logs for coredns [61a7fa439749] ...
	I0213 15:13:45.987416    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61a7fa439749"
	I0213 15:13:45.999550    3378 logs.go:123] Gathering logs for kube-scheduler [01510d85af63] ...
	I0213 15:13:45.999562    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01510d85af63"
	I0213 15:13:46.015029    3378 logs.go:123] Gathering logs for Docker ...
	I0213 15:13:46.015040    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:13:46.037647    3378 logs.go:123] Gathering logs for kube-apiserver [f2de7ba62e3a] ...
	I0213 15:13:46.037656    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2de7ba62e3a"
	I0213 15:13:46.052718    3378 logs.go:123] Gathering logs for coredns [8ae24fa2b68a] ...
	I0213 15:13:46.052728    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ae24fa2b68a"
	I0213 15:13:46.064933    3378 logs.go:123] Gathering logs for storage-provisioner [a609dae9ec7d] ...
	I0213 15:13:46.064943    3378 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a609dae9ec7d"
	I0213 15:13:46.076139    3378 logs.go:123] Gathering logs for container status ...
	I0213 15:13:46.076150    3378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:13:46.535524    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:46.535547    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:48.590017    3378 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:51.536694    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:51.536743    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:53.592165    3378 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:53.595661    3378 out.go:177] 
	W0213 15:13:53.599668    3378 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0213 15:13:53.599679    3378 out.go:239] * 
	W0213 15:13:53.600487    3378 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:13:53.611621    3378 out.go:177] 
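
This is where process 3378 (the running-upgrade-781000 cluster) gives up: GET https://10.0.2.15:8443/healthz never succeeded within the 6m0s node-wait budget, so the start exits with GUEST_START. The probes land roughly five seconds apart, consistent with a short per-request client timeout inside a longer retry loop. A minimal sketch of that poll shape, assuming a 5s per-probe timeout and skipping TLS verification for the probe; illustrative only, not minikube's actual code:

    // Minimal sketch of the healthz poll loop implied by the api_server.go
    // lines above: GET /healthz with a short client timeout, retried until
    // an overall deadline. Timeout values are assumptions from the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-probe "context deadline exceeded" budget
            Transport: &http.Transport{
                // The apiserver cert is not trusted by this host; skip
                // verification for the liveness probe only.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(6 * time.Minute) // overall node-wait budget
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("apiserver healthy")
                return
            }
            if err != nil {
                fmt.Println("stopped:", err)
            } else {
                resp.Body.Close()
            }
            time.Sleep(time.Second)
        }
        fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
    }
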
	I0213 15:13:56.537636    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:56.537659    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:14:01.539432    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:14:01.539474    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:14:06.541607    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:14:06.541743    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:14:06.558661    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:14:06.558755    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:14:06.572199    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:14:06.572275    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:14:06.583642    3510 logs.go:276] 2 containers: [c39f02d73180 82cfef7f8576]
	I0213 15:14:06.583704    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:14:06.594629    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:14:06.594702    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:14:06.605367    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:14:06.605445    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:14:06.616271    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:14:06.616345    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:14:06.626922    3510 logs.go:276] 0 containers: []
	W0213 15:14:06.626933    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:14:06.626995    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:14:06.637645    3510 logs.go:276] 1 containers: [06e60b7523c4]
	I0213 15:14:06.637660    3510 logs.go:123] Gathering logs for coredns [82cfef7f8576] ...
	I0213 15:14:06.637665    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cfef7f8576"
	I0213 15:14:06.649524    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:14:06.649535    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:14:06.667832    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:14:06.667843    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:14:06.672445    3510 logs.go:123] Gathering logs for coredns [c39f02d73180] ...
	I0213 15:14:06.672455    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39f02d73180"
	I0213 15:14:06.684132    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:14:06.684143    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:14:06.699466    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:14:06.699481    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:14:06.723962    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:14:06.723974    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:14:06.754945    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:14:06.755043    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:14:06.755949    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:14:06.755954    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:14:06.770473    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:14:06.770483    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:14:06.804875    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:14:06.804886    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:14:06.820955    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:14:06.820964    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:14:06.832744    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:14:06.832753    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:14:06.844692    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:14:06.844704    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:14:06.858948    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:14:06.858962    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:14:06.858987    3510 out.go:239] X Problems detected in kubelet:
	W0213 15:14:06.858990    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:14:06.858994    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:14:06.858999    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:14:06.859002    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
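
The two kubelet problems flagged above are Node-authorizer denials: with RBAC plus the Node authorizer, a kubelet may read a ConfigMap such as kube-system/coredns only once a pod bound to that node references it, and "no relationship found between node ... and this object" is the authorizer reporting that no such edge exists yet, which is typical of a restart or upgrade window before pods are re-bound. The rejected call is an ordinary ConfigMap read; a client-go sketch of the same request, assuming a kubeconfig at the default path (illustrative only):

    // Sketch of the API read the kubelet's reflector is attempting above.
    // When run under a node identity before any bound pod references the
    // ConfigMap, the Node authorizer returns the "forbidden" error seen in
    // the log. Kubeconfig path is an assumption for illustration.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        home, _ := os.UserHomeDir()
        config, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        cm, err := clientset.CoreV1().ConfigMaps("kube-system").Get(
            context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            fmt.Println("get coredns configmap:", err) // forbidden under a node identity
            return
        }
        fmt.Println(cm.Data["Corefile"])
    }
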
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-02-13 23:04:26 UTC, ends at Tue 2024-02-13 23:14:09 UTC. --
	Feb 13 23:13:48 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:48Z" level=error msg="ContainerStats resp: {0x400077cbc0 linux}"
	Feb 13 23:13:48 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:48Z" level=error msg="ContainerStats resp: {0x400092a380 linux}"
	Feb 13 23:13:48 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:48Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Feb 13 23:13:49 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:49Z" level=error msg="ContainerStats resp: {0x400092b540 linux}"
	Feb 13 23:13:50 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:50Z" level=error msg="ContainerStats resp: {0x40004ac1c0 linux}"
	Feb 13 23:13:50 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:50Z" level=error msg="ContainerStats resp: {0x4000356a80 linux}"
	Feb 13 23:13:50 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:50Z" level=error msg="ContainerStats resp: {0x40004acd40 linux}"
	Feb 13 23:13:50 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:50Z" level=error msg="ContainerStats resp: {0x4000357a80 linux}"
	Feb 13 23:13:50 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:50Z" level=error msg="ContainerStats resp: {0x40004ad880 linux}"
	Feb 13 23:13:50 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:50Z" level=error msg="ContainerStats resp: {0x40004adf80 linux}"
	Feb 13 23:13:50 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:50Z" level=error msg="ContainerStats resp: {0x400041e600 linux}"
	Feb 13 23:13:53 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:53Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Feb 13 23:13:58 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:13:58Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Feb 13 23:14:00 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:14:00Z" level=error msg="ContainerStats resp: {0x4000994580 linux}"
	Feb 13 23:14:00 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:14:00Z" level=error msg="ContainerStats resp: {0x4000994f00 linux}"
	Feb 13 23:14:01 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:14:01Z" level=error msg="ContainerStats resp: {0x400092af40 linux}"
	Feb 13 23:14:02 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:14:02Z" level=error msg="ContainerStats resp: {0x40004ad640 linux}"
	Feb 13 23:14:02 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:14:02Z" level=error msg="ContainerStats resp: {0x40004ada80 linux}"
	Feb 13 23:14:02 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:14:02Z" level=error msg="ContainerStats resp: {0x40004adbc0 linux}"
	Feb 13 23:14:02 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:14:02Z" level=error msg="ContainerStats resp: {0x40004ade80 linux}"
	Feb 13 23:14:02 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:14:02Z" level=error msg="ContainerStats resp: {0x400079a040 linux}"
	Feb 13 23:14:02 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:14:02Z" level=error msg="ContainerStats resp: {0x40008a8b00 linux}"
	Feb 13 23:14:02 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:14:02Z" level=error msg="ContainerStats resp: {0x400079b140 linux}"
	Feb 13 23:14:03 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:14:03Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Feb 13 23:14:08 running-upgrade-781000 cri-dockerd[3055]: time="2024-02-13T23:14:08Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	70cce5993e6ca       edaa71f2aee88       45 seconds ago      Running             coredns                   2                   80b3968159dec
	61a7fa4397495       edaa71f2aee88       45 seconds ago      Running             coredns                   2                   cda9c5c2c2f7f
	8ae24fa2b68a1       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   cda9c5c2c2f7f
	5966c105587d0       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   80b3968159dec
	a609dae9ec7d2       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   d1cfc481ebd13
	06765b9e63657       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   4cb3a6a845cd5
	01510d85af63c       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   519482d0bcc9b
	1fa40f5f6ed4f       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   bb022b00620c3
	2434d0c241f50       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   18a680e43cc87
	f2de7ba62e3a2       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   dc39b41da015f
	
	
	==> coredns [5966c105587d] <==
	[INFO] plugin/reload: Running configuration MD5 = 15ba990df895ecddba8ce0ceabdc0ab8
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:60080 - 61130 "HINFO IN 7432504735117542570.5673685835479364006. udp 57 false 512" - - 0 6.004549057s
	[ERROR] plugin/errors: 2 7432504735117542570.5673685835479364006. HINFO: read udp 10.244.0.2:36872->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:44367 - 18147 "HINFO IN 7432504735117542570.5673685835479364006. udp 57 false 512" - - 0 6.002572135s
	[ERROR] plugin/errors: 2 7432504735117542570.5673685835479364006. HINFO: read udp 10.244.0.2:37608->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:51726 - 62795 "HINFO IN 7432504735117542570.5673685835479364006. udp 57 false 512" - - 0 4.004245384s
	[ERROR] plugin/errors: 2 7432504735117542570.5673685835479364006. HINFO: read udp 10.244.0.2:58055->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:37671 - 16911 "HINFO IN 7432504735117542570.5673685835479364006. udp 57 false 512" - - 0 2.00052564s
	[ERROR] plugin/errors: 2 7432504735117542570.5673685835479364006. HINFO: read udp 10.244.0.2:56088->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:38758 - 58482 "HINFO IN 7432504735117542570.5673685835479364006. udp 57 false 512" - - 0 2.000612794s
	[ERROR] plugin/errors: 2 7432504735117542570.5673685835479364006. HINFO: read udp 10.244.0.2:49884->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:41354 - 49286 "HINFO IN 7432504735117542570.5673685835479364006. udp 57 false 512" - - 0 2.000997204s
	[ERROR] plugin/errors: 2 7432504735117542570.5673685835479364006. HINFO: read udp 10.244.0.2:56984->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:48325 - 30861 "HINFO IN 7432504735117542570.5673685835479364006. udp 57 false 512" - - 0 2.000145313s
	[ERROR] plugin/errors: 2 7432504735117542570.5673685835479364006. HINFO: read udp 10.244.0.2:46414->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:39470 - 65215 "HINFO IN 7432504735117542570.5673685835479364006. udp 57 false 512" - - 0 2.000515384s
	[ERROR] plugin/errors: 2 7432504735117542570.5673685835479364006. HINFO: read udp 10.244.0.2:34825->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:59779 - 30232 "HINFO IN 7432504735117542570.5673685835479364006. udp 57 false 512" - - 0 2.000580528s
	[ERROR] plugin/errors: 2 7432504735117542570.5673685835479364006. HINFO: read udp 10.244.0.2:32869->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:56079 - 29251 "HINFO IN 7432504735117542570.5673685835479364006. udp 57 false 512" - - 0 2.000281868s
	[ERROR] plugin/errors: 2 7432504735117542570.5673685835479364006. HINFO: read udp 10.244.0.2:50958->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [61a7fa439749] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 15ba990df895ecddba8ce0ceabdc0ab8
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:35245 - 63692 "HINFO IN 1354566006491322214.8791996769730048458. udp 57 false 512" - - 0 6.002420795s
	[ERROR] plugin/errors: 2 1354566006491322214.8791996769730048458. HINFO: read udp 10.244.0.3:34823->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:39787 - 33868 "HINFO IN 1354566006491322214.8791996769730048458. udp 57 false 512" - - 0 6.003419672s
	[ERROR] plugin/errors: 2 1354566006491322214.8791996769730048458. HINFO: read udp 10.244.0.3:46802->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:42249 - 50707 "HINFO IN 1354566006491322214.8791996769730048458. udp 57 false 512" - - 0 4.001601386s
	[ERROR] plugin/errors: 2 1354566006491322214.8791996769730048458. HINFO: read udp 10.244.0.3:47929->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:40681 - 14438 "HINFO IN 1354566006491322214.8791996769730048458. udp 57 false 512" - - 0 2.00075466s
	[ERROR] plugin/errors: 2 1354566006491322214.8791996769730048458. HINFO: read udp 10.244.0.3:36230->10.0.2.3:53: i/o timeout
	
	
	==> coredns [70cce5993e6c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 15ba990df895ecddba8ce0ceabdc0ab8
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:48921 - 9330 "HINFO IN 827186636021362985.1135851342285812654. udp 56 false 512" - - 0 6.002131406s
	[ERROR] plugin/errors: 2 827186636021362985.1135851342285812654. HINFO: read udp 10.244.0.2:39378->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:52945 - 52389 "HINFO IN 827186636021362985.1135851342285812654. udp 56 false 512" - - 0 6.00184813s
	[ERROR] plugin/errors: 2 827186636021362985.1135851342285812654. HINFO: read udp 10.244.0.2:54343->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:55428 - 9136 "HINFO IN 827186636021362985.1135851342285812654. udp 56 false 512" - - 0 4.001157232s
	[ERROR] plugin/errors: 2 827186636021362985.1135851342285812654. HINFO: read udp 10.244.0.2:33447->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:38421 - 38673 "HINFO IN 827186636021362985.1135851342285812654. udp 56 false 512" - - 0 2.001627514s
	[ERROR] plugin/errors: 2 827186636021362985.1135851342285812654. HINFO: read udp 10.244.0.2:44028->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:36154 - 17996 "HINFO IN 827186636021362985.1135851342285812654. udp 56 false 512" - - 0 2.000705877s
	[ERROR] plugin/errors: 2 827186636021362985.1135851342285812654. HINFO: read udp 10.244.0.2:41088->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:48857 - 57171 "HINFO IN 827186636021362985.1135851342285812654. udp 56 false 512" - - 0 2.0007408s
	[ERROR] plugin/errors: 2 827186636021362985.1135851342285812654. HINFO: read udp 10.244.0.2:40071->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:52143 - 15139 "HINFO IN 827186636021362985.1135851342285812654. udp 56 false 512" - - 0 2.000641103s
	[ERROR] plugin/errors: 2 827186636021362985.1135851342285812654. HINFO: read udp 10.244.0.2:32845->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:58482 - 54504 "HINFO IN 827186636021362985.1135851342285812654. udp 56 false 512" - - 0 2.001157081s
	[ERROR] plugin/errors: 2 827186636021362985.1135851342285812654. HINFO: read udp 10.244.0.2:47980->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:50217 - 7765 "HINFO IN 827186636021362985.1135851342285812654. udp 56 false 512" - - 0 2.000992539s
	[ERROR] plugin/errors: 2 827186636021362985.1135851342285812654. HINFO: read udp 10.244.0.2:56161->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:41960 - 36342 "HINFO IN 827186636021362985.1135851342285812654. udp 56 false 512" - - 0 2.00057582s
	[ERROR] plugin/errors: 2 827186636021362985.1135851342285812654. HINFO: read udp 10.244.0.2:49900->10.0.2.3:53: i/o timeout
	
	
	==> coredns [8ae24fa2b68a] <==
	[INFO] plugin/reload: Running configuration MD5 = 15ba990df895ecddba8ce0ceabdc0ab8
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:44027 - 45881 "HINFO IN 3036935211319739433.2459923489053880092. udp 57 false 512" - - 0 6.00184142s
	[ERROR] plugin/errors: 2 3036935211319739433.2459923489053880092. HINFO: read udp 10.244.0.3:47645->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:39493 - 21629 "HINFO IN 3036935211319739433.2459923489053880092. udp 57 false 512" - - 0 6.0027591s
	[ERROR] plugin/errors: 2 3036935211319739433.2459923489053880092. HINFO: read udp 10.244.0.3:46026->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:45663 - 22716 "HINFO IN 3036935211319739433.2459923489053880092. udp 57 false 512" - - 0 4.002109598s
	[ERROR] plugin/errors: 2 3036935211319739433.2459923489053880092. HINFO: read udp 10.244.0.3:39668->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:51753 - 29172 "HINFO IN 3036935211319739433.2459923489053880092. udp 57 false 512" - - 0 2.000978698s
	[ERROR] plugin/errors: 2 3036935211319739433.2459923489053880092. HINFO: read udp 10.244.0.3:58032->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:42860 - 21739 "HINFO IN 3036935211319739433.2459923489053880092. udp 57 false 512" - - 0 2.001081602s
	[ERROR] plugin/errors: 2 3036935211319739433.2459923489053880092. HINFO: read udp 10.244.0.3:59305->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:37083 - 45611 "HINFO IN 3036935211319739433.2459923489053880092. udp 57 false 512" - - 0 2.001342342s
	[ERROR] plugin/errors: 2 3036935211319739433.2459923489053880092. HINFO: read udp 10.244.0.3:34148->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:49907 - 6204 "HINFO IN 3036935211319739433.2459923489053880092. udp 57 false 512" - - 0 2.000244775s
	[ERROR] plugin/errors: 2 3036935211319739433.2459923489053880092. HINFO: read udp 10.244.0.3:52796->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:48951 - 5073 "HINFO IN 3036935211319739433.2459923489053880092. udp 57 false 512" - - 0 2.000893982s
	[ERROR] plugin/errors: 2 3036935211319739433.2459923489053880092. HINFO: read udp 10.244.0.3:51484->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:35734 - 63908 "HINFO IN 3036935211319739433.2459923489053880092. udp 57 false 512" - - 0 2.000639029s
	[ERROR] plugin/errors: 2 3036935211319739433.2459923489053880092. HINFO: read udp 10.244.0.3:39658->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:40226 - 38134 "HINFO IN 3036935211319739433.2459923489053880092. udp 57 false 512" - - 0 2.001291032s
	[ERROR] plugin/errors: 2 3036935211319739433.2459923489053880092. HINFO: read udp 10.244.0.3:42126->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
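
All four CoreDNS instances above show the same failure loop: the HINFO queries with long random labels are CoreDNS's loop-detection probe, which is forwarded to the VM's upstream resolver at 10.0.2.3:53 (the built-in DNS of QEMU user-mode networking), and every read times out, so the guest has no working upstream DNS. A small Go probe that reproduces the same "read udp ... i/o timeout" against that resolver, assuming it is run inside the guest (illustrative):

    // Probe the upstream resolver CoreDNS is timing out against above.
    // Forces Go's resolver to 10.0.2.3:53 with a 2s deadline; when the
    // upstream is unreachable this fails with an i/o timeout, matching
    // the CoreDNS errors. Target hostname is arbitrary.
    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, network, "10.0.2.3:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "kubernetes.io")
        if err != nil {
            fmt.Println("upstream DNS probe failed:", err) // i/o timeout when 10.0.2.3:53 is dead
            return
        }
        fmt.Println(addrs)
    }
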
	
	
	==> describe nodes <==
	Name:               running-upgrade-781000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-781000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb52fe04bc8b044b129ef2ff27607d20a9fceb93
	                    minikube.k8s.io/name=running-upgrade-781000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T15_09_22_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 23:09:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-781000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 23:14:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:09:22 +0000   Tue, 13 Feb 2024 23:09:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:09:22 +0000   Tue, 13 Feb 2024 23:09:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:09:22 +0000   Tue, 13 Feb 2024 23:09:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:09:22 +0000   Tue, 13 Feb 2024 23:09:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-781000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 8746d0374ce644db9c95babb24f90a42
	  System UUID:                8746d0374ce644db9c95babb24f90a42
	  Boot ID:                    4abe8ffc-6ca9-490a-bd6f-c8ddc22b16dc
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dd5x4                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m34s
	  kube-system                 coredns-6d4b75cb6d-wkfs4                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m34s
	  kube-system                 etcd-running-upgrade-781000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m49s
	  kube-system                 kube-apiserver-running-upgrade-781000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-running-upgrade-781000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-znmvm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-scheduler-running-upgrade-781000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m32s  kube-proxy       
	  Normal  NodeReady                4m48s  kubelet          Node running-upgrade-781000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m48s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m48s  kubelet          Node running-upgrade-781000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s  kubelet          Node running-upgrade-781000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s  kubelet          Node running-upgrade-781000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m48s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m34s  node-controller  Node running-upgrade-781000 event: Registered Node running-upgrade-781000 in Controller
	
	
	==> dmesg <==
	[  +0.065338] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +2.200346] systemd-fstab-generator[716]: Ignoring "noauto" for root device
	[  +1.634895] systemd-fstab-generator[873]: Ignoring "noauto" for root device
	[  +0.063125] systemd-fstab-generator[884]: Ignoring "noauto" for root device
	[  +0.057427] systemd-fstab-generator[895]: Ignoring "noauto" for root device
	[  +1.153915] kauditd_printk_skb: 75 callbacks suppressed
	[  +0.043097] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +0.058340] systemd-fstab-generator[1056]: Ignoring "noauto" for root device
	[  +2.058220] systemd-fstab-generator[1284]: Ignoring "noauto" for root device
	[ +14.170516] systemd-fstab-generator[1951]: Ignoring "noauto" for root device
	[  +2.836326] systemd-fstab-generator[2222]: Ignoring "noauto" for root device
	[  +0.139999] systemd-fstab-generator[2256]: Ignoring "noauto" for root device
	[  +0.093104] systemd-fstab-generator[2267]: Ignoring "noauto" for root device
	[  +0.090369] systemd-fstab-generator[2280]: Ignoring "noauto" for root device
	[Feb13 23:05] kauditd_printk_skb: 25 callbacks suppressed
	[  +0.198492] systemd-fstab-generator[3011]: Ignoring "noauto" for root device
	[  +0.079703] systemd-fstab-generator[3023]: Ignoring "noauto" for root device
	[  +0.084661] systemd-fstab-generator[3034]: Ignoring "noauto" for root device
	[  +0.092158] systemd-fstab-generator[3048]: Ignoring "noauto" for root device
	[  +2.056598] systemd-fstab-generator[3202]: Ignoring "noauto" for root device
	[  +6.589541] systemd-fstab-generator[3735]: Ignoring "noauto" for root device
	[ +21.169603] kauditd_printk_skb: 68 callbacks suppressed
	[Feb13 23:09] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.255378] systemd-fstab-generator[11932]: Ignoring "noauto" for root device
	[  +5.139829] systemd-fstab-generator[12520]: Ignoring "noauto" for root device
	
	
	==> etcd [1fa40f5f6ed4] <==
	{"level":"info","ts":"2024-02-13T23:09:18.564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-02-13T23:09:18.564Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-02-13T23:09:18.570Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-13T23:09:18.570Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-13T23:09:18.570Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-13T23:09:18.570Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-02-13T23:09:18.570Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-02-13T23:09:18.850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-13T23:09:18.850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-13T23:09:18.850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-02-13T23:09:18.850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-02-13T23:09:18.850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-02-13T23:09:18.850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-02-13T23:09:18.850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-02-13T23:09:18.850Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:09:18.853Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:09:18.853Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:09:18.853Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:09:18.853Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-781000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T23:09:18.853Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:09:18.854Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T23:09:18.862Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:09:18.863Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-02-13T23:09:18.863Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T23:09:18.863Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:14:10 up 9 min,  0 users,  load average: 0.62, 0.39, 0.21
	Linux running-upgrade-781000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f2de7ba62e3a] <==
	I0213 23:09:20.237030       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0213 23:09:20.239017       1 cache.go:39] Caches are synced for autoregister controller
	I0213 23:09:20.239042       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0213 23:09:20.239494       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0213 23:09:20.240224       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0213 23:09:20.246717       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0213 23:09:20.271384       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0213 23:09:20.980732       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0213 23:09:21.151453       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0213 23:09:21.158196       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0213 23:09:21.158288       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0213 23:09:21.306957       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0213 23:09:21.318948       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0213 23:09:21.401094       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0213 23:09:21.403366       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0213 23:09:21.403839       1 controller.go:611] quota admission added evaluator for: endpoints
	I0213 23:09:21.405204       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0213 23:09:22.271459       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0213 23:09:22.551437       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0213 23:09:22.554672       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0213 23:09:22.568490       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0213 23:09:22.603434       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0213 23:09:36.201325       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0213 23:09:36.301694       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0213 23:09:37.088205       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [2434d0c241f5] <==
	I0213 23:09:36.234463       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0213 23:09:36.234485       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0213 23:09:36.247438       1 range_allocator.go:374] Set node running-upgrade-781000 PodCIDR to [10.244.0.0/24]
	I0213 23:09:36.274503       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0213 23:09:36.296708       1 shared_informer.go:262] Caches are synced for deployment
	I0213 23:09:36.302717       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0213 23:09:36.321619       1 shared_informer.go:262] Caches are synced for job
	I0213 23:09:36.321620       1 shared_informer.go:262] Caches are synced for cronjob
	I0213 23:09:36.371453       1 shared_informer.go:262] Caches are synced for persistent volume
	I0213 23:09:36.372517       1 shared_informer.go:262] Caches are synced for stateful set
	I0213 23:09:36.372537       1 shared_informer.go:262] Caches are synced for ephemeral
	I0213 23:09:36.372549       1 shared_informer.go:262] Caches are synced for expand
	I0213 23:09:36.379193       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0213 23:09:36.382846       1 shared_informer.go:262] Caches are synced for attach detach
	I0213 23:09:36.386226       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dd5x4"
	I0213 23:09:36.391538       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-wkfs4"
	I0213 23:09:36.421420       1 shared_informer.go:262] Caches are synced for PVC protection
	I0213 23:09:36.421514       1 shared_informer.go:262] Caches are synced for disruption
	I0213 23:09:36.421539       1 disruption.go:371] Sending events to api server.
	I0213 23:09:36.442651       1 shared_informer.go:262] Caches are synced for resource quota
	I0213 23:09:36.457022       1 shared_informer.go:262] Caches are synced for resource quota
	I0213 23:09:36.486661       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0213 23:09:36.861304       1 shared_informer.go:262] Caches are synced for garbage collector
	I0213 23:09:36.871479       1 shared_informer.go:262] Caches are synced for garbage collector
	I0213 23:09:36.871488       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [06765b9e6365] <==
	I0213 23:09:37.056274       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0213 23:09:37.056313       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0213 23:09:37.056333       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0213 23:09:37.085889       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0213 23:09:37.085899       1 server_others.go:206] "Using iptables Proxier"
	I0213 23:09:37.085924       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0213 23:09:37.086052       1 server.go:661] "Version info" version="v1.24.1"
	I0213 23:09:37.086057       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 23:09:37.086330       1 config.go:317] "Starting service config controller"
	I0213 23:09:37.086339       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0213 23:09:37.086356       1 config.go:226] "Starting endpoint slice config controller"
	I0213 23:09:37.086359       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0213 23:09:37.086856       1 config.go:444] "Starting node config controller"
	I0213 23:09:37.086862       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0213 23:09:37.186686       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0213 23:09:37.186710       1 shared_informer.go:262] Caches are synced for service config
	I0213 23:09:37.186907       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [01510d85af63] <==
	W0213 23:09:20.182391       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 23:09:20.182403       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 23:09:20.182452       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0213 23:09:20.182465       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0213 23:09:20.182494       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 23:09:20.182505       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 23:09:20.182575       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 23:09:20.182588       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 23:09:21.029603       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 23:09:21.029814       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 23:09:21.096686       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 23:09:21.096949       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 23:09:21.142535       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 23:09:21.143088       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0213 23:09:21.153365       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 23:09:21.153615       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0213 23:09:21.172979       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 23:09:21.173070       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0213 23:09:21.190908       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 23:09:21.191051       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0213 23:09:21.197261       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 23:09:21.197385       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 23:09:21.222893       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 23:09:21.222932       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0213 23:09:23.176847       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 23:04:26 UTC, ends at Tue 2024-02-13 23:14:10 UTC. --
	Feb 13 23:09:23 running-upgrade-781000 kubelet[12526]: I0213 23:09:23.586187   12526 apiserver.go:52] "Watching apiserver"
	Feb 13 23:09:24 running-upgrade-781000 kubelet[12526]: I0213 23:09:24.009051   12526 reconciler.go:157] "Reconciler: start to sync state"
	Feb 13 23:09:24 running-upgrade-781000 kubelet[12526]: E0213 23:09:24.186670   12526 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-781000\" already exists" pod="kube-system/etcd-running-upgrade-781000"
	Feb 13 23:09:24 running-upgrade-781000 kubelet[12526]: E0213 23:09:24.386801   12526 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-781000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-781000"
	Feb 13 23:09:24 running-upgrade-781000 kubelet[12526]: E0213 23:09:24.587226   12526 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-781000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-781000"
	Feb 13 23:09:24 running-upgrade-781000 kubelet[12526]: I0213 23:09:24.784447   12526 request.go:601] Waited for 1.116597895s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Feb 13 23:09:24 running-upgrade-781000 kubelet[12526]: E0213 23:09:24.787091   12526 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-781000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-781000"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.190007   12526 topology_manager.go:200] "Topology Admit Handler"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.208785   12526 topology_manager.go:200] "Topology Admit Handler"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.307756   12526 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.307941   12526 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqk78\" (UniqueName: \"kubernetes.io/projected/78d05804-04cf-444b-8f3e-fbb3958970f1-kube-api-access-nqk78\") pod \"kube-proxy-znmvm\" (UID: \"78d05804-04cf-444b-8f3e-fbb3958970f1\") " pod="kube-system/kube-proxy-znmvm"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.307963   12526 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx2bk\" (UniqueName: \"kubernetes.io/projected/d52c5184-fbb8-4213-82cf-01cdde86c07b-kube-api-access-jx2bk\") pod \"storage-provisioner\" (UID: \"d52c5184-fbb8-4213-82cf-01cdde86c07b\") " pod="kube-system/storage-provisioner"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.307986   12526 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/78d05804-04cf-444b-8f3e-fbb3958970f1-kube-proxy\") pod \"kube-proxy-znmvm\" (UID: \"78d05804-04cf-444b-8f3e-fbb3958970f1\") " pod="kube-system/kube-proxy-znmvm"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.308001   12526 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78d05804-04cf-444b-8f3e-fbb3958970f1-xtables-lock\") pod \"kube-proxy-znmvm\" (UID: \"78d05804-04cf-444b-8f3e-fbb3958970f1\") " pod="kube-system/kube-proxy-znmvm"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.308011   12526 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78d05804-04cf-444b-8f3e-fbb3958970f1-lib-modules\") pod \"kube-proxy-znmvm\" (UID: \"78d05804-04cf-444b-8f3e-fbb3958970f1\") " pod="kube-system/kube-proxy-znmvm"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.308022   12526 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d52c5184-fbb8-4213-82cf-01cdde86c07b-tmp\") pod \"storage-provisioner\" (UID: \"d52c5184-fbb8-4213-82cf-01cdde86c07b\") " pod="kube-system/storage-provisioner"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.308270   12526 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.388228   12526 topology_manager.go:200] "Topology Admit Handler"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.395571   12526 topology_manager.go:200] "Topology Admit Handler"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.508852   12526 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad036dd5-6f0f-48d8-846a-aff99ec43f2b-config-volume\") pod \"coredns-6d4b75cb6d-wkfs4\" (UID: \"ad036dd5-6f0f-48d8-846a-aff99ec43f2b\") " pod="kube-system/coredns-6d4b75cb6d-wkfs4"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.508883   12526 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6mt8\" (UniqueName: \"kubernetes.io/projected/ad036dd5-6f0f-48d8-846a-aff99ec43f2b-kube-api-access-w6mt8\") pod \"coredns-6d4b75cb6d-wkfs4\" (UID: \"ad036dd5-6f0f-48d8-846a-aff99ec43f2b\") " pod="kube-system/coredns-6d4b75cb6d-wkfs4"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.508905   12526 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg8cl\" (UniqueName: \"kubernetes.io/projected/b84004df-28be-44a9-949e-454b3a51ddc5-kube-api-access-fg8cl\") pod \"coredns-6d4b75cb6d-dd5x4\" (UID: \"b84004df-28be-44a9-949e-454b3a51ddc5\") " pod="kube-system/coredns-6d4b75cb6d-dd5x4"
	Feb 13 23:09:36 running-upgrade-781000 kubelet[12526]: I0213 23:09:36.508918   12526 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b84004df-28be-44a9-949e-454b3a51ddc5-config-volume\") pod \"coredns-6d4b75cb6d-dd5x4\" (UID: \"b84004df-28be-44a9-949e-454b3a51ddc5\") " pod="kube-system/coredns-6d4b75cb6d-dd5x4"
	Feb 13 23:13:25 running-upgrade-781000 kubelet[12526]: I0213 23:13:25.047663   12526 scope.go:110] "RemoveContainer" containerID="d447b53b1dd0e13e3425c3ebacaae3c01a9d5deb3e17c82d2ef584d49691c757"
	Feb 13 23:13:25 running-upgrade-781000 kubelet[12526]: I0213 23:13:25.069151   12526 scope.go:110] "RemoveContainer" containerID="6e5977a9cc4099faf8ff353eebe73d527b5309a0e66ea17763795306891e6da3"
	
	
	==> storage-provisioner [a609dae9ec7d] <==
	I0213 23:09:37.055148       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 23:09:37.072432       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 23:09:37.072450       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 23:09:37.077514       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 23:09:37.078319       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f694e5d-1b50-4057-920c-a1a1e4b19e3f", APIVersion:"v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-781000_01d23b3f-3fe7-435d-a521-6418a246c932 became leader
	I0213 23:09:37.078338       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-781000_01d23b3f-3fe7-435d-a521-6418a246c932!
	I0213 23:09:37.179168       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-781000_01d23b3f-3fe7-435d-a521-6418a246c932!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-781000 -n running-upgrade-781000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-781000 -n running-upgrade-781000: exit status 2 (15.577729625s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-781000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-781000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-781000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-781000: (2.393429791s)
--- FAIL: TestRunningBinaryUpgrade (656.76s)
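[Editor's note: in this run the upgraded cluster did come up (see the kube-proxy and storage-provisioner logs above), but the final status probe reported the apiserver as "Stopped". A minimal way to re-check a profile by hand, as a sketch using only commands that already appear in this report:

	# Re-query the apiserver state for the profile; a non-zero exit indicates a stopped component
	out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-781000
	# Gather full logs for a bug report, as the failure box in the other tests suggests
	out/minikube-darwin-arm64 logs --file=logs.txt -p running-upgrade-781000
]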

TestKubernetesUpgrade (15.22s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-274000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-274000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.722801042s)

-- stdout --
	* [kubernetes-upgrade-274000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-274000 in cluster kubernetes-upgrade-274000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-274000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:06:34.242384    3448 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:06:34.242706    3448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:06:34.242712    3448 out.go:304] Setting ErrFile to fd 2...
	I0213 15:06:34.242715    3448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:06:34.242838    3448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:06:34.244039    3448 out.go:298] Setting JSON to false
	I0213 15:06:34.260502    3448 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2016,"bootTime":1707863578,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:06:34.260561    3448 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:06:34.266373    3448 out.go:177] * [kubernetes-upgrade-274000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:06:34.274333    3448 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:06:34.278365    3448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:06:34.274417    3448 notify.go:220] Checking for updates...
	I0213 15:06:34.281257    3448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:06:34.284325    3448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:06:34.287346    3448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:06:34.290329    3448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:06:34.293700    3448 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:06:34.293762    3448 config.go:182] Loaded profile config "running-upgrade-781000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:06:34.293811    3448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:06:34.298321    3448 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:06:34.305307    3448 start.go:298] selected driver: qemu2
	I0213 15:06:34.305314    3448 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:06:34.305319    3448 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:06:34.307558    3448 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:06:34.311298    3448 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:06:34.314329    3448 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 15:06:34.314345    3448 cni.go:84] Creating CNI manager for ""
	I0213 15:06:34.314352    3448 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 15:06:34.314357    3448 start_flags.go:321] config:
	{Name:kubernetes-upgrade-274000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-274000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:06:34.319007    3448 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:06:34.326296    3448 out.go:177] * Starting control plane node kubernetes-upgrade-274000 in cluster kubernetes-upgrade-274000
	I0213 15:06:34.330301    3448 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 15:06:34.330323    3448 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0213 15:06:34.330332    3448 cache.go:56] Caching tarball of preloaded images
	I0213 15:06:34.330391    3448 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:06:34.330396    3448 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 15:06:34.330448    3448 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/kubernetes-upgrade-274000/config.json ...
	I0213 15:06:34.330457    3448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/kubernetes-upgrade-274000/config.json: {Name:mk4f7391852b48e38df8164d0aa958319308879e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:06:34.330719    3448 start.go:365] acquiring machines lock for kubernetes-upgrade-274000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:06:34.330753    3448 start.go:369] acquired machines lock for "kubernetes-upgrade-274000" in 25.708µs
	I0213 15:06:34.330763    3448 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-274000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-274000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:06:34.330792    3448 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:06:34.334396    3448 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:06:34.355169    3448 start.go:159] libmachine.API.Create for "kubernetes-upgrade-274000" (driver="qemu2")
	I0213 15:06:34.355197    3448 client.go:168] LocalClient.Create starting
	I0213 15:06:34.355268    3448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:06:34.355295    3448 main.go:141] libmachine: Decoding PEM data...
	I0213 15:06:34.355305    3448 main.go:141] libmachine: Parsing certificate...
	I0213 15:06:34.355346    3448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:06:34.355367    3448 main.go:141] libmachine: Decoding PEM data...
	I0213 15:06:34.355374    3448 main.go:141] libmachine: Parsing certificate...
	I0213 15:06:34.355699    3448 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:06:34.489902    3448 main.go:141] libmachine: Creating SSH key...
	I0213 15:06:34.553653    3448 main.go:141] libmachine: Creating Disk image...
	I0213 15:06:34.553665    3448 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:06:34.553902    3448 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0213 15:06:34.587562    3448 main.go:141] libmachine: STDOUT: 
	I0213 15:06:34.587589    3448 main.go:141] libmachine: STDERR: 
	I0213 15:06:34.587669    3448 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2 +20000M
	I0213 15:06:34.598808    3448 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:06:34.598824    3448 main.go:141] libmachine: STDERR: 
	I0213 15:06:34.598841    3448 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0213 15:06:34.598847    3448 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:06:34.598877    3448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:3d:08:43:43:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0213 15:06:34.600808    3448 main.go:141] libmachine: STDOUT: 
	I0213 15:06:34.600823    3448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:06:34.600840    3448 client.go:171] LocalClient.Create took 245.64ms
	I0213 15:06:36.602521    3448 start.go:128] duration metric: createHost completed in 2.271734042s
	I0213 15:06:36.602571    3448 start.go:83] releasing machines lock for "kubernetes-upgrade-274000", held for 2.271833125s
	W0213 15:06:36.602597    3448 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:06:36.609565    3448 out.go:177] * Deleting "kubernetes-upgrade-274000" in qemu2 ...
	W0213 15:06:36.626804    3448 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:06:36.626822    3448 start.go:709] Will try again in 5 seconds ...
	I0213 15:06:41.628719    3448 start.go:365] acquiring machines lock for kubernetes-upgrade-274000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:06:41.629070    3448 start.go:369] acquired machines lock for "kubernetes-upgrade-274000" in 275.208µs
	I0213 15:06:41.629199    3448 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-274000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-274000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:06:41.629360    3448 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:06:41.637825    3448 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:06:41.677205    3448 start.go:159] libmachine.API.Create for "kubernetes-upgrade-274000" (driver="qemu2")
	I0213 15:06:41.677246    3448 client.go:168] LocalClient.Create starting
	I0213 15:06:41.677370    3448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:06:41.677437    3448 main.go:141] libmachine: Decoding PEM data...
	I0213 15:06:41.677452    3448 main.go:141] libmachine: Parsing certificate...
	I0213 15:06:41.677510    3448 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:06:41.677549    3448 main.go:141] libmachine: Decoding PEM data...
	I0213 15:06:41.677563    3448 main.go:141] libmachine: Parsing certificate...
	I0213 15:06:41.678094    3448 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:06:41.808469    3448 main.go:141] libmachine: Creating SSH key...
	I0213 15:06:41.873518    3448 main.go:141] libmachine: Creating Disk image...
	I0213 15:06:41.873527    3448 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:06:41.873732    3448 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0213 15:06:41.886741    3448 main.go:141] libmachine: STDOUT: 
	I0213 15:06:41.886776    3448 main.go:141] libmachine: STDERR: 
	I0213 15:06:41.886854    3448 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2 +20000M
	I0213 15:06:41.898293    3448 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:06:41.898325    3448 main.go:141] libmachine: STDERR: 
	I0213 15:06:41.898337    3448 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0213 15:06:41.898342    3448 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:06:41.898383    3448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:f6:e4:39:05:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0213 15:06:41.900188    3448 main.go:141] libmachine: STDOUT: 
	I0213 15:06:41.900210    3448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:06:41.900222    3448 client.go:171] LocalClient.Create took 222.976042ms
	I0213 15:06:43.902288    3448 start.go:128] duration metric: createHost completed in 2.272946334s
	I0213 15:06:43.902328    3448 start.go:83] releasing machines lock for "kubernetes-upgrade-274000", held for 2.273281542s
	W0213 15:06:43.902534    3448 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-274000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-274000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:06:43.910908    3448 out.go:177] 
	W0213 15:06:43.914908    3448 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:06:43.914917    3448 out.go:239] * 
	* 
	W0213 15:06:43.915721    3448 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:06:43.925881    3448 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-274000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
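[Editor's note: both create attempts above fail in the same place: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A quick host-side sanity check, as a sketch assuming socket_vmnet is installed at the paths shown in the log:

	# Does the daemon's unix socket exist?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon process actually running?
	pgrep -fl socket_vmnet

If the daemon is down, every qemu2-driver test on this host fails at VM creation in the same way, which is consistent with the many ~10s Start failures in this report.]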
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-274000
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-274000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-274000 status --format={{.Host}}: exit status 7 (32.761042ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-274000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-274000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.190397084s)

-- stdout --
	* [kubernetes-upgrade-274000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-274000 in cluster kubernetes-upgrade-274000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-274000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-274000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:06:44.071175    3467 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:06:44.071282    3467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:06:44.071285    3467 out.go:304] Setting ErrFile to fd 2...
	I0213 15:06:44.071287    3467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:06:44.071436    3467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:06:44.072530    3467 out.go:298] Setting JSON to false
	I0213 15:06:44.089427    3467 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2026,"bootTime":1707863578,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:06:44.089491    3467 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:06:44.094881    3467 out.go:177] * [kubernetes-upgrade-274000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:06:44.101847    3467 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:06:44.105886    3467 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:06:44.101890    3467 notify.go:220] Checking for updates...
	I0213 15:06:44.112805    3467 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:06:44.115866    3467 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:06:44.118912    3467 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:06:44.121822    3467 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:06:44.125100    3467 config.go:182] Loaded profile config "kubernetes-upgrade-274000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0213 15:06:44.125352    3467 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:06:44.129823    3467 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 15:06:44.136853    3467 start.go:298] selected driver: qemu2
	I0213 15:06:44.136858    3467 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-274000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-274000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:06:44.136934    3467 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:06:44.139580    3467 cni.go:84] Creating CNI manager for ""
	I0213 15:06:44.139596    3467 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:06:44.139602    3467 start_flags.go:321] config:
	{Name:kubernetes-upgrade-274000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-274000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:06:44.143951    3467 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:06:44.150785    3467 out.go:177] * Starting control plane node kubernetes-upgrade-274000 in cluster kubernetes-upgrade-274000
	I0213 15:06:44.154859    3467 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 15:06:44.154875    3467 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0213 15:06:44.154884    3467 cache.go:56] Caching tarball of preloaded images
	I0213 15:06:44.154936    3467 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:06:44.154941    3467 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0213 15:06:44.155010    3467 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/kubernetes-upgrade-274000/config.json ...
	I0213 15:06:44.155470    3467 start.go:365] acquiring machines lock for kubernetes-upgrade-274000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:06:44.155493    3467 start.go:369] acquired machines lock for "kubernetes-upgrade-274000" in 17.833µs
	I0213 15:06:44.155501    3467 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:06:44.155507    3467 fix.go:54] fixHost starting: 
	I0213 15:06:44.155608    3467 fix.go:102] recreateIfNeeded on kubernetes-upgrade-274000: state=Stopped err=<nil>
	W0213 15:06:44.155618    3467 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:06:44.159829    3467 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-274000" ...
	I0213 15:06:44.166849    3467 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:f6:e4:39:05:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0213 15:06:44.168808    3467 main.go:141] libmachine: STDOUT: 
	I0213 15:06:44.168826    3467 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:06:44.168861    3467 fix.go:56] fixHost completed within 13.355417ms
	I0213 15:06:44.168866    3467 start.go:83] releasing machines lock for "kubernetes-upgrade-274000", held for 13.369375ms
	W0213 15:06:44.168870    3467 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:06:44.168900    3467 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:06:44.168904    3467 start.go:709] Will try again in 5 seconds ...
	I0213 15:06:49.170973    3467 start.go:365] acquiring machines lock for kubernetes-upgrade-274000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:06:49.171409    3467 start.go:369] acquired machines lock for "kubernetes-upgrade-274000" in 355.542µs
	I0213 15:06:49.171497    3467 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:06:49.171516    3467 fix.go:54] fixHost starting: 
	I0213 15:06:49.172252    3467 fix.go:102] recreateIfNeeded on kubernetes-upgrade-274000: state=Stopped err=<nil>
	W0213 15:06:49.172280    3467 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:06:49.181754    3467 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-274000" ...
	I0213 15:06:49.185990    3467 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:f6:e4:39:05:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0213 15:06:49.196152    3467 main.go:141] libmachine: STDOUT: 
	I0213 15:06:49.196212    3467 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:06:49.196283    3467 fix.go:56] fixHost completed within 24.769125ms
	I0213 15:06:49.196301    3467 start.go:83] releasing machines lock for "kubernetes-upgrade-274000", held for 24.870083ms
	W0213 15:06:49.196475    3467 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-274000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-274000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:06:49.204625    3467 out.go:177] 
	W0213 15:06:49.207824    3467 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:06:49.207852    3467 out.go:239] * 
	* 
	W0213 15:06:49.210349    3467 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:06:49.218739    3467 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-274000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-274000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-274000 version --output=json: exit status 1 (64.87775ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-274000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:523: *** TestKubernetesUpgrade FAILED at 2024-02-13 15:06:49.298752 -0800 PST m=+1685.223836668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-274000 -n kubernetes-upgrade-274000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-274000 -n kubernetes-upgrade-274000: exit status 7 (35.004709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-274000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-274000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-274000
--- FAIL: TestKubernetesUpgrade (15.22s)
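Every start attempt in this test dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM never boots, no kubeconfig context is written, and the follow-up kubectl call fails with context "kubernetes-upgrade-274000" does not exist. The same error recurs in most of the qemu2 failures in this report, which points at the agent environment rather than the upgrade path under test. A minimal triage sketch follows; the probe program is hypothetical, and the socket path is copied from the failing command above:

	// probe_socket_vmnet.go: dial the unix socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Same path socket_vmnet_client is given in the failing command above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// A "connection refused" here matches the driver failure in the log.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

If the probe fails, the socket_vmnet service on the agent is down or was never started, and restarting it should be tried before treating this as a minikube regression.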

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18170
- KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current702900390/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.51s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18170
- KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3274665747/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.51s)
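Both TestHyperkitDriverSkipUpgrade subtests fail the same way: the hyperkit driver only exists for Intel Macs, so on this darwin/arm64 agent minikube correctly refuses it with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. This is an expected-environment failure rather than a regression, and the cleanest fix is to skip these subtests on unsupported hosts. A sketch of such a guard, assuming it were added to the harness (the helper name is hypothetical and not present in driver_install_or_update_test.go):

	package integration

	import (
		"runtime"
		"testing"
	)

	// skipUnlessHyperkitSupported skips hyperkit-specific subtests on hosts
	// where the driver can never run, such as this darwin/arm64 agent.
	func skipUnlessHyperkitSupported(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit requires darwin/amd64; host is %s/%s", runtime.GOOS, runtime.GOARCH)
		}
	}

Calling the helper at the top of each subtest would turn these two failures into skips without hiding real upgrade regressions on Intel agents.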

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (616.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.184798206 start -p stopped-upgrade-809000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.184798206 start -p stopped-upgrade-809000 --memory=2200 --vm-driver=qemu2 : (46.702164667s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.184798206 -p stopped-upgrade-809000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.184798206 -p stopped-upgrade-809000 stop: (12.112308333s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-809000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0213 15:08:34.943408    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 15:10:03.211704    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:11:08.461729    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 15:12:31.531159    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 15:13:34.937078    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-809000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9m17.521096709s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-809000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting control plane node stopped-upgrade-809000 in cluster stopped-upgrade-809000
	* Restarting existing qemu2 VM for "stopped-upgrade-809000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying Kubernetes components...
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:07:53.493226    3510 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:07:53.493393    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:07:53.493398    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:07:53.493402    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:07:53.493573    3510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:07:53.494746    3510 out.go:298] Setting JSON to false
	I0213 15:07:53.513584    3510 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2095,"bootTime":1707863578,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:07:53.513642    3510 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:07:53.518882    3510 out.go:177] * [stopped-upgrade-809000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:07:53.525855    3510 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:07:53.529846    3510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:07:53.525904    3510 notify.go:220] Checking for updates...
	I0213 15:07:53.531314    3510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:07:53.534838    3510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:07:53.537865    3510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:07:53.540927    3510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:07:53.545083    3510 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:07:53.550185    3510 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0213 15:07:53.553016    3510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:07:53.557884    3510 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 15:07:53.564709    3510 start.go:298] selected driver: qemu2
	I0213 15:07:53.564715    3510 start.go:902] validating driver "qemu2" against &{Name:stopped-upgrade-809000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50344 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-809000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:07:53.564769    3510 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:07:53.567480    3510 cni.go:84] Creating CNI manager for ""
	I0213 15:07:53.567494    3510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:07:53.567501    3510 start_flags.go:321] config:
	{Name:stopped-upgrade-809000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50344 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-809000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:07:53.567592    3510 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:07:53.574829    3510 out.go:177] * Starting control plane node stopped-upgrade-809000 in cluster stopped-upgrade-809000
	I0213 15:07:53.578846    3510 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0213 15:07:53.578862    3510 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0213 15:07:53.578872    3510 cache.go:56] Caching tarball of preloaded images
	I0213 15:07:53.578933    3510 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:07:53.578939    3510 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.1 on docker
	I0213 15:07:53.579005    3510 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/config.json ...
	I0213 15:07:53.579522    3510 start.go:365] acquiring machines lock for stopped-upgrade-809000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:07:53.579564    3510 start.go:369] acquired machines lock for "stopped-upgrade-809000" in 35.75µs
	I0213 15:07:53.579573    3510 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:07:53.579579    3510 fix.go:54] fixHost starting: 
	I0213 15:07:53.579696    3510 fix.go:102] recreateIfNeeded on stopped-upgrade-809000: state=Stopped err=<nil>
	W0213 15:07:53.579704    3510 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:07:53.587810    3510 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-809000" ...
	I0213 15:07:53.591893    3510 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50309-:22,hostfwd=tcp::50310-:2376,hostname=stopped-upgrade-809000 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/disk.qcow2
	I0213 15:07:53.640304    3510 main.go:141] libmachine: STDOUT: 
	I0213 15:07:53.640337    3510 main.go:141] libmachine: STDERR: 
	I0213 15:07:53.640359    3510 main.go:141] libmachine: Waiting for VM to start (ssh -p 50309 docker@127.0.0.1)...
	I0213 15:08:14.283493    3510 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/config.json ...
	I0213 15:08:14.284156    3510 machine.go:88] provisioning docker machine ...
	I0213 15:08:14.284209    3510 buildroot.go:166] provisioning hostname "stopped-upgrade-809000"
	I0213 15:08:14.284379    3510 main.go:141] libmachine: Using SSH client type: native
	I0213 15:08:14.285148    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10049b8e0] 0x10049e050 <nil>  [] 0s} localhost 50309 <nil> <nil>}
	I0213 15:08:14.285168    3510 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-809000 && echo "stopped-upgrade-809000" | sudo tee /etc/hostname
	I0213 15:08:14.381516    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-809000
	
	I0213 15:08:14.381600    3510 main.go:141] libmachine: Using SSH client type: native
	I0213 15:08:14.381975    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10049b8e0] 0x10049e050 <nil>  [] 0s} localhost 50309 <nil> <nil>}
	I0213 15:08:14.381988    3510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-809000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-809000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-809000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 15:08:14.453530    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 15:08:14.453544    3510 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18170-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18170-979/.minikube}
	I0213 15:08:14.453554    3510 buildroot.go:174] setting up certificates
	I0213 15:08:14.453566    3510 provision.go:83] configureAuth start
	I0213 15:08:14.453570    3510 provision.go:138] copyHostCerts
	I0213 15:08:14.453687    3510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem, removing ...
	I0213 15:08:14.453696    3510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem
	I0213 15:08:14.453839    3510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/ca.pem (1078 bytes)
	I0213 15:08:14.454054    3510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem, removing ...
	I0213 15:08:14.454060    3510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem
	I0213 15:08:14.454119    3510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/cert.pem (1123 bytes)
	I0213 15:08:14.454240    3510 exec_runner.go:144] found /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem, removing ...
	I0213 15:08:14.454244    3510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem
	I0213 15:08:14.454302    3510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18170-979/.minikube/key.pem (1675 bytes)
	I0213 15:08:14.454412    3510 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-809000 san=[127.0.0.1 localhost localhost 127.0.0.1 minikube stopped-upgrade-809000]
	I0213 15:08:14.487904    3510 provision.go:172] copyRemoteCerts
	I0213 15:08:14.487934    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 15:08:14.487941    3510 sshutil.go:53] new ssh client: &{IP:localhost Port:50309 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa Username:docker}
	I0213 15:08:14.522972    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0213 15:08:14.529725    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 15:08:14.536705    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 15:08:14.543914    3510 provision.go:86] duration metric: configureAuth took 90.346042ms
	I0213 15:08:14.543922    3510 buildroot.go:189] setting minikube options for container-runtime
	I0213 15:08:14.544038    3510 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:08:14.544072    3510 main.go:141] libmachine: Using SSH client type: native
	I0213 15:08:14.544294    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10049b8e0] 0x10049e050 <nil>  [] 0s} localhost 50309 <nil> <nil>}
	I0213 15:08:14.544300    3510 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0213 15:08:14.609769    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0213 15:08:14.609778    3510 buildroot.go:70] root file system type: tmpfs
	I0213 15:08:14.609833    3510 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0213 15:08:14.609889    3510 main.go:141] libmachine: Using SSH client type: native
	I0213 15:08:14.610143    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10049b8e0] 0x10049e050 <nil>  [] 0s} localhost 50309 <nil> <nil>}
	I0213 15:08:14.610181    3510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0213 15:08:14.678717    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0213 15:08:14.678766    3510 main.go:141] libmachine: Using SSH client type: native
	I0213 15:08:14.679035    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10049b8e0] 0x10049e050 <nil>  [] 0s} localhost 50309 <nil> <nil>}
	I0213 15:08:14.679045    3510 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0213 15:08:15.036824    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0213 15:08:15.036837    3510 machine.go:91] provisioned docker machine in 752.68725ms
	I0213 15:08:15.036842    3510 start.go:300] post-start starting for "stopped-upgrade-809000" (driver="qemu2")
	I0213 15:08:15.036849    3510 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 15:08:15.036916    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 15:08:15.036925    3510 sshutil.go:53] new ssh client: &{IP:localhost Port:50309 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa Username:docker}
	I0213 15:08:15.071947    3510 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 15:08:15.073138    3510 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 15:08:15.073144    3510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/addons for local assets ...
	I0213 15:08:15.073209    3510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18170-979/.minikube/files for local assets ...
	I0213 15:08:15.073320    3510 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem -> 14072.pem in /etc/ssl/certs
	I0213 15:08:15.073440    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 15:08:15.076238    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem --> /etc/ssl/certs/14072.pem (1708 bytes)
	I0213 15:08:15.083111    3510 start.go:303] post-start completed in 46.264583ms
	I0213 15:08:15.083119    3510 fix.go:56] fixHost completed within 21.50400075s
	I0213 15:08:15.083156    3510 main.go:141] libmachine: Using SSH client type: native
	I0213 15:08:15.083394    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10049b8e0] 0x10049e050 <nil>  [] 0s} localhost 50309 <nil> <nil>}
	I0213 15:08:15.083399    3510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0213 15:08:15.149231    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865695.424839254
	
	I0213 15:08:15.149238    3510 fix.go:206] guest clock: 1707865695.424839254
	I0213 15:08:15.149242    3510 fix.go:219] Guest: 2024-02-13 15:08:15.424839254 -0800 PST Remote: 2024-02-13 15:08:15.08312 -0800 PST m=+21.623028584 (delta=341.719254ms)
	I0213 15:08:15.149251    3510 fix.go:190] guest clock delta is within tolerance: 341.719254ms
	I0213 15:08:15.149257    3510 start.go:83] releasing machines lock for "stopped-upgrade-809000", held for 21.570149583s
	I0213 15:08:15.149304    3510 ssh_runner.go:195] Run: cat /version.json
	I0213 15:08:15.149311    3510 sshutil.go:53] new ssh client: &{IP:localhost Port:50309 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa Username:docker}
	I0213 15:08:15.149330    3510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 15:08:15.149347    3510 sshutil.go:53] new ssh client: &{IP:localhost Port:50309 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa Username:docker}
	W0213 15:08:15.149985    3510 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50309: connect: connection refused
	I0213 15:08:15.150001    3510 retry.go:31] will retry after 298.580206ms: dial tcp [::1]:50309: connect: connection refused
	W0213 15:08:15.181992    3510 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0213 15:08:15.182032    3510 ssh_runner.go:195] Run: systemctl --version
	I0213 15:08:15.184398    3510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 15:08:15.185811    3510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 15:08:15.185838    3510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0213 15:08:15.188728    3510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0213 15:08:15.193701    3510 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 15:08:15.193708    3510 start.go:475] detecting cgroup driver to use...
	I0213 15:08:15.193776    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:08:15.200195    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0213 15:08:15.203351    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0213 15:08:15.206096    3510 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0213 15:08:15.206125    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0213 15:08:15.209206    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:08:15.212664    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0213 15:08:15.215948    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0213 15:08:15.218876    3510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 15:08:15.221741    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0213 15:08:15.224976    3510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 15:08:15.228027    3510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 15:08:15.230688    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:08:15.310645    3510 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0213 15:08:15.316677    3510 start.go:475] detecting cgroup driver to use...
	I0213 15:08:15.316733    3510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0213 15:08:15.322175    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 15:08:15.327007    3510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 15:08:15.335182    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 15:08:15.339309    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 15:08:15.344001    3510 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0213 15:08:15.400865    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0213 15:08:15.406315    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 15:08:15.411684    3510 ssh_runner.go:195] Run: which cri-dockerd
	I0213 15:08:15.412932    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0213 15:08:15.415828    3510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0213 15:08:15.420525    3510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0213 15:08:15.511129    3510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0213 15:08:15.589948    3510 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0213 15:08:15.590012    3510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0213 15:08:15.596132    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:08:15.673165    3510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:08:16.830868    3510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157709333s)
	I0213 15:08:16.830941    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0213 15:08:16.835288    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 15:08:16.839436    3510 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0213 15:08:16.919344    3510 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0213 15:08:17.005101    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:08:17.086350    3510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0213 15:08:17.092329    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0213 15:08:17.097152    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:08:17.177825    3510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0213 15:08:17.215949    3510 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0213 15:08:17.216026    3510 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0213 15:08:17.218087    3510 start.go:543] Will wait 60s for crictl version
	I0213 15:08:17.218125    3510 ssh_runner.go:195] Run: which crictl
	I0213 15:08:17.219706    3510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 15:08:17.235610    3510 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0213 15:08:17.235680    3510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:08:17.252843    3510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0213 15:08:17.277619    3510 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0213 15:08:17.277701    3510 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0213 15:08:17.279066    3510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 15:08:17.282629    3510 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0213 15:08:17.282670    3510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:08:17.293535    3510 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 15:08:17.293544    3510 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0213 15:08:17.293592    3510 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 15:08:17.297041    3510 ssh_runner.go:195] Run: which lz4
	I0213 15:08:17.298293    3510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 15:08:17.299450    3510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 15:08:17.299460    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0213 15:08:18.036198    3510 docker.go:649] Took 0.737942 seconds to copy over tarball
	I0213 15:08:18.036252    3510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 15:08:19.213501    3510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.17726125s)
	I0213 15:08:19.213515    3510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 15:08:19.229374    3510 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0213 15:08:19.232707    3510 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0213 15:08:19.237888    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 15:08:19.315220    3510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0213 15:08:21.593089    3510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.277900583s)
	I0213 15:08:21.593187    3510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0213 15:08:21.607633    3510 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0213 15:08:21.607640    3510 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0213 15:08:21.607645    3510 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 15:08:21.622649    3510 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:08:21.622726    3510 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:08:21.622747    3510 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:08:21.622831    3510 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0213 15:08:21.622847    3510 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:08:21.622937    3510 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0213 15:08:21.622973    3510 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:08:21.622994    3510 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:08:21.631606    3510 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:08:21.631641    3510 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0213 15:08:21.631670    3510 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:08:21.631798    3510 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:08:21.631839    3510 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:08:21.632267    3510 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0213 15:08:21.632407    3510 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:08:21.632532    3510 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:08:23.852743    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:08:23.880462    3510 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0213 15:08:23.880501    3510 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:08:23.880593    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0213 15:08:23.899467    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0213 15:08:23.902984    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0213 15:08:23.919878    3510 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0213 15:08:23.919901    3510 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0213 15:08:23.919956    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0213 15:08:23.931036    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0213 15:08:23.932483    3510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0213 15:08:23.934200    3510 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0213 15:08:23.934214    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0213 15:08:23.941497    3510 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0213 15:08:23.941507    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0213 15:08:23.950165    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:08:23.975993    3510 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0213 15:08:23.976024    3510 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0213 15:08:23.976043    3510 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:08:23.976100    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0213 15:08:23.986369    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0213 15:08:23.988194    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:08:23.997956    3510 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0213 15:08:23.997976    3510 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:08:23.998038    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0213 15:08:24.003256    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:08:24.004535    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0213 15:08:24.010146    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0213 15:08:24.011792    3510 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0213 15:08:24.011908    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:08:24.015977    3510 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0213 15:08:24.015997    3510 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:08:24.016043    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0213 15:08:24.029878    3510 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0213 15:08:24.029898    3510 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0213 15:08:24.029953    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0213 15:08:24.030280    3510 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0213 15:08:24.030290    3510 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:08:24.030313    3510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0213 15:08:24.045767    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0213 15:08:24.045787    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0213 15:08:24.045818    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0213 15:08:24.045883    3510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0213 15:08:24.047294    3510 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0213 15:08:24.047305    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0213 15:08:24.084120    3510 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0213 15:08:24.084133    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0213 15:08:24.120713    3510 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0213 15:08:24.475156    3510 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0213 15:08:24.475402    3510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:08:24.495775    3510 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0213 15:08:24.495803    3510 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:08:24.495884    3510 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:08:24.516072    3510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0213 15:08:24.516188    3510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0213 15:08:24.517876    3510 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0213 15:08:24.517889    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0213 15:08:24.544664    3510 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0213 15:08:24.544682    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0213 15:08:24.781117    3510 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0213 15:08:24.781155    3510 cache_images.go:92] LoadImages completed in 3.173572208s
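The load pattern above repeats once per cached image: stat the tarball inside the VM, scp it over only when the stat fails, then pipe it into docker load. A minimal local Go sketch of the same check-copy-load sequence, with hypothetical paths and the SSH hop omitted (the real flow runs each step through ssh_runner):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	// loadCachedImage mirrors the per-image sequence in the log: check whether
	// the tarball already exists at the target, copy it only when missing, then
	// hand it to `docker load`. Both paths are illustrative; the real flow runs
	// the stat, the copy (scp), and the load over SSH inside the VM.
	func loadCachedImage(cached, target string) error {
		if _, err := os.Stat(target); os.IsNotExist(err) {
			if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
				return err
			}
			data, err := os.ReadFile(cached) // stands in for the scp step
			if err != nil {
				return err
			}
			if err := os.WriteFile(target, data, 0o644); err != nil {
				return err
			}
		}
		out, err := exec.Command("docker", "load", "-i", target).CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker load: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Hypothetical paths; the real cache lives under .minikube/cache/images.
		if err := loadCachedImage("/tmp/cache/coredns_v1.8.6", "/tmp/images/coredns_v1.8.6"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}

Copying only on a failed stat is what keeps later starts cheap: tarballs already present under /var/lib/minikube/images are skipped entirely.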
	W0213 15:08:24.781192    3510 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0213 15:08:24.781257    3510 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0213 15:08:24.794131    3510 cni.go:84] Creating CNI manager for ""
	I0213 15:08:24.794144    3510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:08:24.794158    3510 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 15:08:24.794167    3510 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-809000 NodeName:stopped-upgrade-809000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 15:08:24.794242    3510 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-809000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 15:08:24.794275    3510 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-809000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-809000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
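In the kubelet drop-in above, the first, empty ExecStart= is the standard systemd reset idiom: it clears the command list inherited from the base unit so that the ExecStart= that follows becomes the only command. A tiny Go sketch staging such a drop-in to a scratch path (flags abbreviated, path illustrative):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// The blank ExecStart= clears the base unit's command list; the second
		// line then installs the override as the sole command. Flags abbreviated.
		dropIn := "[Service]\n" +
			"ExecStart=\n" +
			"ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --config=/var/lib/kubelet/config.yaml\n"
		if err := os.WriteFile("/tmp/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}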
	I0213 15:08:24.794325    3510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0213 15:08:24.797195    3510 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 15:08:24.797228    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 15:08:24.800183    3510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0213 15:08:24.805049    3510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 15:08:24.810047    3510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0213 15:08:24.815465    3510 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0213 15:08:24.816685    3510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
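The /etc/hosts rewrite above is idempotent: grep -v strips any stale control-plane.minikube.internal line, the echo appends the fresh mapping, and the temp file is copied back into place. The same logic as a Go sketch, pointed at a scratch copy rather than the real /etc/hosts:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry keeps every non-empty line that does not already end in
	// "\t<name>" (the pattern the log's grep -v removes), appends the fresh
	// mapping, and writes the result back through a temp file. The log copies
	// the temp file with `sudo cp`; os.Rename plays that role here.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		if err := ensureHostsEntry("/tmp/hosts", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}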
	I0213 15:08:24.820232    3510 certs.go:56] Setting up /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000 for IP: 10.0.2.15
	I0213 15:08:24.820244    3510 certs.go:190] acquiring lock for shared ca certs: {Name:mk65e421691b8fb2c09fb65e08f20f9a769da9f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:08:24.820383    3510 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key
	I0213 15:08:24.820428    3510 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key
	I0213 15:08:24.820494    3510 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.key
	I0213 15:08:24.820539    3510 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/apiserver.key.49504c3e
	I0213 15:08:24.820583    3510 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/proxy-client.key
	I0213 15:08:24.820711    3510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem (1338 bytes)
	W0213 15:08:24.820743    3510 certs.go:433] ignoring /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407_empty.pem, impossibly tiny 0 bytes
	I0213 15:08:24.820749    3510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 15:08:24.820777    3510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem (1078 bytes)
	I0213 15:08:24.820805    3510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem (1123 bytes)
	I0213 15:08:24.820830    3510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/certs/Users/jenkins/minikube-integration/18170-979/.minikube/certs/key.pem (1675 bytes)
	I0213 15:08:24.820883    3510 certs.go:437] found cert: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem (1708 bytes)
	I0213 15:08:24.821223    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 15:08:24.828553    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 15:08:24.836094    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 15:08:24.843293    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 15:08:24.850289    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 15:08:24.856770    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 15:08:24.864118    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 15:08:24.870995    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0213 15:08:24.877726    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/certs/1407.pem --> /usr/share/ca-certificates/1407.pem (1338 bytes)
	I0213 15:08:24.884390    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/ssl/certs/14072.pem --> /usr/share/ca-certificates/14072.pem (1708 bytes)
	I0213 15:08:24.890760    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 15:08:24.897283    3510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 15:08:24.902488    3510 ssh_runner.go:195] Run: openssl version
	I0213 15:08:24.904413    3510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14072.pem && ln -fs /usr/share/ca-certificates/14072.pem /etc/ssl/certs/14072.pem"
	I0213 15:08:24.908009    3510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14072.pem
	I0213 15:08:24.909370    3510 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:48 /usr/share/ca-certificates/14072.pem
	I0213 15:08:24.909393    3510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14072.pem
	I0213 15:08:24.911152    3510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14072.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 15:08:24.913859    3510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 15:08:24.916763    3510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:08:24.918353    3510 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:40 /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:08:24.918380    3510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 15:08:24.920268    3510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 15:08:24.923652    3510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1407.pem && ln -fs /usr/share/ca-certificates/1407.pem /etc/ssl/certs/1407.pem"
	I0213 15:08:24.926566    3510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1407.pem
	I0213 15:08:24.928013    3510 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:48 /usr/share/ca-certificates/1407.pem
	I0213 15:08:24.928034    3510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1407.pem
	I0213 15:08:24.929813    3510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1407.pem /etc/ssl/certs/51391683.0"
	I0213 15:08:24.933003    3510 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 15:08:24.934372    3510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 15:08:24.936191    3510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 15:08:24.938182    3510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 15:08:24.939838    3510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 15:08:24.941607    3510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 15:08:24.943271    3510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
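Each openssl x509 -checkend 86400 above asks one question: will this certificate expire within the next 86400 seconds (24 hours)? The command exits non-zero if so, which is what would trigger regeneration. The equivalent check with Go's crypto/x509, against an illustrative path:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkend reports whether the certificate at path expires within d,
	// the question "openssl x509 -checkend 86400" answers in the log above.
	func checkend(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Illustrative path; the log checks the apiserver, etcd, and proxy certs.
		expiring, err := checkend("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", expiring)
	}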
	I0213 15:08:24.944986    3510 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-809000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50344 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-809000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:08:24.945057    3510 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:08:24.955455    3510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 15:08:24.958295    3510 host.go:66] Checking if "stopped-upgrade-809000" exists ...
	I0213 15:08:24.959334    3510 main.go:141] libmachine: Using SSH client type: external
	I0213 15:08:24.959348    3510 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa (-rw-------)
	I0213 15:08:24.959369    3510 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa -p 50309] /usr/bin/ssh <nil>}
	I0213 15:08:24.959385    3510 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa -p 50309 -f -NTL 50344:localhost:8443
	I0213 15:08:25.000785    3510 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 15:08:25.000818    3510 kubeadm.go:636] restartCluster start
	I0213 15:08:25.000897    3510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 15:08:25.004347    3510 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 15:08:25.004696    3510 kubeconfig.go:135] verify returned: extract IP: "stopped-upgrade-809000" does not appear in /Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:08:25.004795    3510 kubeconfig.go:146] "stopped-upgrade-809000" context is missing from /Users/jenkins/minikube-integration/18170-979/kubeconfig - will repair!
	I0213 15:08:25.004993    3510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/kubeconfig: {Name:mkf66d96abab1e512e6f2721c341e70e5b11c9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:08:25.005443    3510 kapi.go:59] client config for stopped-upgrade-809000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101777f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 15:08:25.005909    3510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 15:08:25.008601    3510 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-809000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
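The reconfigure decision turns on the exit status of the diff just shown: diff -u exits 0 when the rendered kubeadm.yaml matches the copy on disk, 1 when they differ, and 2 on trouble such as a missing file. A Go sketch of that three-way decision (paths as in the log, error mapping assumed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configsDiffer runs `diff -u` and maps its exit status: a nil error means
	// the files are identical (status 0), exit status 1 means they differ, and
	// anything else (for example status 2 on a missing file) is a real error.
	func configsDiffer(current, rendered string) (bool, string, error) {
		out, err := exec.Command("diff", "-u", current, rendered).CombinedOutput()
		if err == nil {
			return false, "", nil
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil
		}
		return false, "", err
	}

	func main() {
		differ, diff, err := configsDiffer(
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new",
		)
		if err != nil {
			fmt.Println("diff failed:", err)
			return
		}
		if differ {
			fmt.Print("needs reconfigure:\n" + diff)
		}
	}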
	I0213 15:08:25.008607    3510 kubeadm.go:1135] stopping kube-system containers ...
	I0213 15:08:25.008643    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0213 15:08:25.018984    3510 docker.go:483] Stopping containers: [414a8117b44a ae16ebf684a6 2c330ef72602 c9bca2ddc84e ea48366b9587 1d0476e0f407 ad6284b5b306 30659c73ce71]
	I0213 15:08:25.019049    3510 ssh_runner.go:195] Run: docker stop 414a8117b44a ae16ebf684a6 2c330ef72602 c9bca2ddc84e ea48366b9587 1d0476e0f407 ad6284b5b306 30659c73ce71
	I0213 15:08:25.029947    3510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 15:08:25.035692    3510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:08:25.038381    3510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 15:08:25.038409    3510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 15:08:25.041138    3510 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 15:08:25.041144    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:08:25.063725    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:08:25.346053    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:08:25.494183    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:08:25.524711    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 15:08:25.551786    3510 api_server.go:52] waiting for apiserver process to appear ...
	I0213 15:08:25.551871    3510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:08:26.054145    3510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:08:26.553926    3510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:08:27.053596    3510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:08:27.057644    3510 api_server.go:72] duration metric: took 1.5058935s to wait for apiserver process to appear ...
	I0213 15:08:27.057654    3510 api_server.go:88] waiting for apiserver healthz status ...
	I0213 15:08:27.057663    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:32.059716    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:32.059750    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:37.059926    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:37.059994    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:42.060332    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:42.060369    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:47.060772    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:47.060861    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:52.061509    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:52.061528    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:08:57.062828    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:08:57.062897    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:02.064118    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:02.064148    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:07.065666    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:07.065733    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:12.068425    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:12.068455    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:17.070563    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:17.070582    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:22.072713    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:22.072757    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:27.075076    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
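Every healthz attempt above has the same shape: an HTTPS GET against the apiserver with a bounded per-request timeout, retried until an overall deadline lapses, after which the loop falls back to gathering component logs as seen below. A minimal Go sketch of that poll; the 5-second request timeout mirrors the spacing of the attempts, and InsecureSkipVerify stands in for the pinned cluster CA the real client config carries:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it answers 200
	// or the overall deadline passes. Each request gets its own bounded
	// timeout, matching the ~5s spacing of the attempts in the log.
	func waitHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustration only: the real client config pins the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", deadline)
	}

	func main() {
		if err := waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}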
	I0213 15:09:27.075320    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:09:27.104011    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:09:27.104142    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:09:27.120389    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:09:27.120482    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:09:27.133433    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:09:27.133508    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:09:27.144811    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:09:27.144894    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:09:27.155641    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:09:27.155702    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:09:27.166246    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:09:27.166315    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:09:27.176987    3510 logs.go:276] 0 containers: []
	W0213 15:09:27.176998    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:09:27.177053    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:09:27.187520    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:09:27.187537    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:09:27.187543    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:09:27.192534    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:09:27.192541    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:09:27.209616    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:09:27.209625    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:09:27.237800    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:27.237810    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:27.263589    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:09:27.263599    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:09:27.278694    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:09:27.278705    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:09:27.290369    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:09:27.290381    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:09:27.307332    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:09:27.307342    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:09:27.318590    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:09:27.318601    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:09:27.332468    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:27.332478    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:27.458411    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:09:27.458423    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:09:27.469902    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:09:27.469915    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:09:27.482424    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:09:27.482435    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:09:27.501878    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:09:27.501890    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:27.516156    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:09:27.516164    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:09:27.529800    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:09:27.529811    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:09:27.546372    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:09:27.546383    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:09:30.060728    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:35.061607    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:35.061848    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:09:35.078133    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:09:35.078222    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:09:35.092781    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:09:35.092855    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:09:35.104021    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:09:35.104096    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:09:35.115004    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:09:35.115086    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:09:35.125538    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:09:35.125603    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:09:35.136337    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:09:35.136414    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:09:35.158566    3510 logs.go:276] 0 containers: []
	W0213 15:09:35.158578    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:09:35.158643    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:09:35.169052    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:09:35.169065    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:09:35.169072    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:09:35.183718    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:09:35.183732    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:09:35.198066    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:09:35.198079    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:09:35.210177    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:09:35.210189    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:09:35.224146    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:09:35.224159    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:09:35.236082    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:09:35.236092    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:09:35.247181    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:09:35.247192    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:35.261518    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:09:35.261525    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:09:35.276300    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:09:35.276314    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:09:35.288404    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:09:35.288415    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:09:35.307008    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:35.307019    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:35.331728    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:09:35.331737    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:09:35.343790    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:35.343800    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:35.382331    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:09:35.382342    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:09:35.409300    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:09:35.409311    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:09:35.420501    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:09:35.420514    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:09:35.435520    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:09:35.435530    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:09:37.941900    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:42.944106    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:42.944263    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:09:42.965267    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:09:42.965455    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:09:42.978090    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:09:42.978157    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:09:42.988850    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:09:42.988929    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:09:42.999983    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:09:43.000071    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:09:43.011009    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:09:43.011076    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:09:43.021982    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:09:43.022054    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:09:43.032362    3510 logs.go:276] 0 containers: []
	W0213 15:09:43.032382    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:09:43.032444    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:09:43.042914    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:09:43.042936    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:09:43.042941    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:09:43.058350    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:09:43.058360    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:09:43.080004    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:43.080016    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:43.103922    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:09:43.103930    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:09:43.116206    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:43.116217    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:43.155521    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:09:43.155531    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:09:43.175862    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:09:43.175873    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:09:43.200424    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:09:43.200437    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:09:43.212178    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:09:43.212190    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:09:43.226923    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:09:43.226933    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:09:43.239041    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:09:43.239050    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:09:43.252706    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:09:43.252716    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:43.267521    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:09:43.267529    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:09:43.271704    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:09:43.271711    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:09:43.283131    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:09:43.283145    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:09:43.298136    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:09:43.298147    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:09:43.312983    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:09:43.312993    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:09:45.826771    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:50.827631    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:50.827811    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:09:50.849586    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:09:50.849701    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:09:50.864505    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:09:50.864585    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:09:50.878516    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:09:50.878580    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:09:50.889532    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:09:50.889597    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:09:50.899834    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:09:50.899905    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:09:50.910665    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:09:50.910753    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:09:50.920216    3510 logs.go:276] 0 containers: []
	W0213 15:09:50.920228    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:09:50.920291    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:09:50.930957    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:09:50.930971    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:09:50.930981    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:50.945391    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:09:50.945398    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:09:50.958905    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:09:50.958915    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:09:50.974427    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:09:50.974438    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:09:50.985796    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:09:50.985807    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:09:50.997466    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:09:50.997477    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:09:51.009148    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:09:51.009158    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:09:51.020454    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:51.020468    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:51.056995    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:09:51.057005    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:09:51.078680    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:09:51.078691    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:09:51.093156    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:09:51.093167    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:09:51.104541    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:51.104553    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:51.129590    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:09:51.129599    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:09:51.140915    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:09:51.140925    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:09:51.145155    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:09:51.145161    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:09:51.170727    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:09:51.170737    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:09:51.185479    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:09:51.185488    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:09:53.704488    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:09:58.706666    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:09:58.706791    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:09:58.721084    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:09:58.721163    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:09:58.732717    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:09:58.732798    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:09:58.743535    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:09:58.743603    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:09:58.754573    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:09:58.754655    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:09:58.764835    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:09:58.764910    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:09:58.775345    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:09:58.775423    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:09:58.785797    3510 logs.go:276] 0 containers: []
	W0213 15:09:58.785808    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:09:58.785865    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:09:58.796630    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:09:58.796644    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:09:58.796650    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:09:58.811028    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:09:58.811039    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:09:58.822552    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:09:58.822568    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:09:58.837626    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:09:58.837638    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:09:58.849178    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:09:58.849189    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:09:58.867735    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:09:58.867749    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:09:58.879882    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:09:58.879893    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:09:58.884303    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:09:58.884309    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:09:58.919234    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:09:58.919245    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:09:58.934178    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:09:58.934186    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:09:58.948254    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:09:58.948264    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:09:58.959463    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:09:58.959472    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:09:58.973137    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:09:58.973147    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:09:58.990538    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:09:58.990548    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:09:59.004364    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:09:59.004374    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:09:59.028386    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:09:59.028394    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:09:59.053320    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:09:59.053332    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:01.566891    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:06.569080    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
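The probe/collect cycle visible above is a plain poll-with-timeout health check: each attempt GETs https://10.0.2.15:8443/healthz, gives up after the client's 5-second deadline ("Client.Timeout exceeded while awaiting headers"), and the loop then falls back to collecting diagnostics before probing again. A minimal Go sketch of that pattern follows; it is hypothetical, not minikube's actual api_server.go code, and it skips TLS verification purely for illustration where minikube would trust the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // each probe gives up after 5s, as in the log
		Transport: &http.Transport{
			// Sketch-only shortcut: skip cert verification. Real code
			// would verify against the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz" // endpoint probed in the log above
	for {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the "stopped: ... context deadline exceeded" lines.
			fmt.Println("stopped:", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(2 * time.Second) // brief back-off between probes
	}
}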
	I0213 15:10:06.569332    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:06.589049    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:06.589119    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:06.600672    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:06.600741    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:06.612999    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:06.613065    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:06.623476    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:06.623540    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:06.634298    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:06.634373    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:06.650220    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:06.650287    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:06.660778    3510 logs.go:276] 0 containers: []
	W0213 15:10:06.660791    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:06.660861    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:06.674322    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:06.674336    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:06.674342    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:06.688365    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:06.688375    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:06.712619    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:06.712632    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:06.724757    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:06.724767    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:06.736580    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:06.736590    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:06.749535    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:06.749546    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:06.764385    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:06.764396    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:06.775798    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:06.775810    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:06.787529    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:06.787539    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:06.802938    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:06.802950    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:06.814209    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:06.814219    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:06.828753    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:06.828760    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:06.832762    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:06.832768    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:06.868297    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:06.868309    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:06.882767    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:06.882776    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:06.907773    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:06.907785    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:06.924872    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:06.924882    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:09.441365    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:14.443679    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:14.443873    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:14.473948    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:14.474069    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:14.492039    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:14.492137    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:14.517058    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:14.517126    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:14.528008    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:14.528089    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:14.538799    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:14.538864    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:14.549443    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:14.549513    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:14.559455    3510 logs.go:276] 0 containers: []
	W0213 15:10:14.559464    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:14.559515    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:14.569914    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:14.569930    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:14.569939    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:14.583474    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:14.583483    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:14.595991    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:14.596001    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:14.610155    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:14.610166    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:14.628861    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:14.628875    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:14.640968    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:14.640980    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:14.656000    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:14.656007    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:14.681617    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:14.681627    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:14.705024    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:14.705035    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:14.721884    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:14.721895    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:14.733446    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:14.733458    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:14.737471    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:14.737478    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:14.751289    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:14.751297    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:14.775590    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:14.775600    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:14.790182    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:14.790195    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:14.825325    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:14.825338    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:14.838831    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:14.838840    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:17.351278    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:22.353618    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:22.353863    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:22.382173    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:22.382301    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:22.399620    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:22.399704    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:22.413905    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:22.413968    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:22.425003    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:22.425079    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:22.435747    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:22.435812    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:22.447114    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:22.447185    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:22.457754    3510 logs.go:276] 0 containers: []
	W0213 15:10:22.457767    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:22.457823    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:22.468175    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:22.468190    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:22.468195    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:22.493568    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:22.493580    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:22.529780    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:22.529792    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:22.556520    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:22.556532    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:22.570575    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:22.570585    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:22.582482    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:22.582493    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:22.594573    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:22.594585    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:22.607259    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:22.607270    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:22.621245    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:22.621256    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:22.636312    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:22.636322    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:22.648087    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:22.648098    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:22.666675    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:22.666687    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:22.679195    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:22.679207    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:22.694251    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:22.694260    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:22.698248    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:22.698255    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:22.716473    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:22.716483    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:22.733653    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:22.733665    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:25.247221    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:30.249477    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:30.249751    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:30.284340    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:30.284474    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:30.304167    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:30.304272    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:30.319146    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:30.319217    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:30.331057    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:30.331126    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:30.341897    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:30.341966    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:30.351989    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:30.352064    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:30.362580    3510 logs.go:276] 0 containers: []
	W0213 15:10:30.362597    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:30.362654    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:30.372682    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:30.372699    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:30.372705    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:30.390446    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:30.390458    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:30.413672    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:30.413692    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:30.424907    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:30.424917    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:30.436500    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:30.436511    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:30.450164    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:30.450173    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:30.464518    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:30.464527    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:30.500095    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:30.500106    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:30.514938    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:30.514949    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:30.563289    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:30.563301    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:30.575222    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:30.575232    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:30.587824    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:30.587835    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:30.592288    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:30.592294    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:30.618857    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:30.618867    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:30.632715    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:30.632726    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:30.644358    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:30.644371    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:30.656822    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:30.656833    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:33.172759    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:38.174845    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:38.175100    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:38.202037    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:38.202149    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:38.224320    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:38.224413    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:38.240649    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:38.240716    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:38.254651    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:38.254722    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:38.265614    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:38.265667    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:38.276250    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:38.276325    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:38.287043    3510 logs.go:276] 0 containers: []
	W0213 15:10:38.287055    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:38.287098    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:38.300419    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:38.300437    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:38.300442    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:38.312712    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:38.312724    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:38.327203    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:38.327214    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:38.338427    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:38.338440    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:38.362839    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:38.362851    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:38.377319    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:38.377332    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:38.381582    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:38.381589    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:38.406304    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:38.406318    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:38.419931    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:38.419944    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:38.433911    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:38.433922    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:38.452030    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:38.452040    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:38.470385    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:38.470398    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:38.484649    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:38.484660    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:38.522040    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:38.522051    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:38.537050    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:38.537064    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:38.550555    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:38.550567    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:38.565888    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:38.565902    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:41.080061    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:46.082011    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:46.082258    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:46.106531    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:46.106628    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:46.122619    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:46.122714    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:46.135774    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:46.135840    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:46.147738    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:46.147804    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:46.158449    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:46.158516    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:46.168768    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:46.168837    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:46.179440    3510 logs.go:276] 0 containers: []
	W0213 15:10:46.179450    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:46.179502    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:46.194676    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:46.194691    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:46.194697    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:46.209188    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:46.209195    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:46.221093    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:46.221104    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:46.225268    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:46.225274    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:46.242112    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:46.242122    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:46.253582    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:46.253593    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:46.265402    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:46.265414    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:46.279133    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:46.279144    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:46.290379    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:46.290394    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:46.310060    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:46.310071    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:46.332655    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:46.332664    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:46.346017    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:46.346028    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:46.366216    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:46.366228    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:46.400905    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:46.400917    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:46.426109    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:46.426118    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:46.440378    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:46.440389    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:46.455649    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:46.455658    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:48.969489    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:10:53.971596    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:10:53.971742    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:10:53.985648    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:10:53.985725    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:10:53.996813    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:10:53.996876    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:10:54.007945    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:10:54.008003    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:10:54.018447    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:10:54.018513    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:10:54.028819    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:10:54.028880    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:10:54.039730    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:10:54.039797    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:10:54.050393    3510 logs.go:276] 0 containers: []
	W0213 15:10:54.050402    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:10:54.050452    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:10:54.060422    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:10:54.060437    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:10:54.060476    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:10:54.076851    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:10:54.076863    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:10:54.088384    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:10:54.088395    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:10:54.113386    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:10:54.113399    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:10:54.129346    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:10:54.129358    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:10:54.140793    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:10:54.140803    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:10:54.157625    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:10:54.157636    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:10:54.172913    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:10:54.172925    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:10:54.177276    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:10:54.177283    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:10:54.215038    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:10:54.215049    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:10:54.228668    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:10:54.228678    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:10:54.252466    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:10:54.252473    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:10:54.267531    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:10:54.267542    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:10:54.279657    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:10:54.279668    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:10:54.293252    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:10:54.293262    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:10:54.308987    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:10:54.308998    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:10:54.320580    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:10:54.320591    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:10:56.834455    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:01.836793    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:01.837018    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:01.862149    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:01.862241    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:01.880672    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:01.880763    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:01.893605    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:01.893674    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:01.904145    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:01.904220    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:01.926183    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:01.926247    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:01.936543    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:01.936610    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:01.950939    3510 logs.go:276] 0 containers: []
	W0213 15:11:01.950951    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:01.951009    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:01.961590    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:01.961605    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:01.961610    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:01.975278    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:01.975289    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:01.987406    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:01.987417    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:01.999388    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:01.999400    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:02.024203    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:02.024213    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:02.037733    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:02.037743    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:02.051679    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:02.051689    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:02.064062    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:02.064074    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:02.068635    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:02.068642    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:02.103488    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:02.103499    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:02.127872    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:02.127879    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:02.143136    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:02.143151    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:02.154777    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:02.154787    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:02.166642    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:02.166653    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:02.181761    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:02.181770    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:02.195289    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:02.195300    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:02.206041    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:02.206052    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:04.725450    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:09.727754    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:09.727975    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:09.750519    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:09.750621    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:09.767659    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:09.767733    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:09.782999    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:09.783062    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:09.793656    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:09.793719    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:09.804289    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:09.804356    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:09.814976    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:09.815034    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:09.829486    3510 logs.go:276] 0 containers: []
	W0213 15:11:09.829499    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:09.829557    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:09.840214    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:09.840232    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:09.840237    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:09.857617    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:09.857627    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:09.869161    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:09.869172    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:09.873123    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:09.873129    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:09.886974    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:09.886986    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:09.898330    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:09.898341    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:09.918705    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:09.918715    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:09.930324    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:09.930336    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:09.942694    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:09.942705    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:09.953674    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:09.953686    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:09.990287    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:09.990297    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:10.015635    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:10.015646    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:10.029923    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:10.029934    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:10.045082    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:10.045093    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:10.059701    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:10.059713    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:10.074795    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:10.074803    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:10.097231    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:10.097238    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:12.612913    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:17.615298    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:17.615871    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:17.646828    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:17.646929    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:17.664150    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:17.664243    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:17.677879    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:17.677955    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:17.689792    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:17.689865    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:17.700455    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:17.700527    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:17.711672    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:17.711746    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:17.721862    3510 logs.go:276] 0 containers: []
	W0213 15:11:17.721873    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:17.721935    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:17.732414    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:17.732429    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:17.732435    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:17.752964    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:17.752975    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:17.767864    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:17.767874    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:17.772398    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:17.772408    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:17.789299    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:17.789310    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:17.800588    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:17.800600    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:17.821535    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:17.821548    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:17.833230    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:17.833242    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:17.856962    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:17.856970    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:17.891478    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:17.891492    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:17.916920    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:17.916931    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:17.928812    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:17.928826    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:17.942535    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:17.942547    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:17.954691    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:17.954704    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:17.967723    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:17.967733    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:17.985467    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:17.985477    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:17.996897    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:17.996907    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:20.516034    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:25.518470    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:25.518655    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:25.536995    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:25.537097    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:25.551137    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:25.551212    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:25.563150    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:25.563219    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:25.573511    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:25.573587    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:25.583876    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:25.583943    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:25.594611    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:25.594684    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:25.605951    3510 logs.go:276] 0 containers: []
	W0213 15:11:25.605967    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:25.606027    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:25.621146    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
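The repeated 'No container was found matching "kindnet"' warning is expected here: this profile ends up on the bridge CNI (configured near the end of this log), so a kindnet container never exists and the warning is informational. A quick check from inside the guest (illustrative):

    ls /etc/cni/net.d/                                              # bridge conflist, no kindnet
    sudo docker ps -a --filter=name=k8s_kindnet --format '{{.ID}}'  # empty output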
	I0213 15:11:25.621163    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:25.621169    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:25.632667    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:25.632678    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:25.636940    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:25.636948    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:25.647916    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:25.647929    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:25.659597    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:25.659608    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:25.673334    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:25.673345    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:25.687891    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:25.687901    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:25.703456    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:25.703467    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:25.717674    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:25.717688    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:25.733170    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:25.733181    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:25.744972    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:25.744982    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:25.772484    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:25.772499    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:25.808341    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:25.808354    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:25.834888    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:25.834898    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:25.848489    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:25.848499    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:25.863211    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:25.863221    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:25.875345    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:25.875355    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:28.394824    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:33.396320    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:33.396482    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:33.408274    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:33.408352    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:33.418928    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:33.419001    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:33.429015    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:33.429082    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:33.441968    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:33.442039    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:33.452616    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:33.452695    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:33.463058    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:33.463125    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:33.473753    3510 logs.go:276] 0 containers: []
	W0213 15:11:33.473762    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:33.473821    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:33.484158    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:33.484173    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:33.484179    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:33.499448    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:33.499456    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:33.534802    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:33.534813    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:33.546628    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:33.546639    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:33.564656    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:33.564665    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:33.579936    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:33.579947    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:33.592312    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:33.592321    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:33.596771    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:33.596780    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:33.611675    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:33.611687    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:33.629527    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:33.629539    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:33.643286    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:33.643297    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:33.655910    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:33.655921    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:33.668424    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:33.668435    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:33.693769    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:33.693780    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:33.708061    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:33.708071    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:33.719392    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:33.719403    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:33.734899    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:33.734910    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:36.258619    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:41.260816    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:41.260971    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:41.274627    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:41.274716    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:41.286477    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:41.286548    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:41.296999    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:41.297071    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:41.307618    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:41.307692    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:41.317631    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:41.317701    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:41.328089    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:41.328170    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:41.338513    3510 logs.go:276] 0 containers: []
	W0213 15:11:41.338523    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:41.338575    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:41.357857    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:41.357872    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:41.357877    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:41.361797    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:41.361803    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:41.373244    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:41.373254    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:41.388794    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:41.388805    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:41.406608    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:41.406619    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:41.420534    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:41.420544    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:41.444061    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:41.444068    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:41.458950    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:41.458959    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:41.473248    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:41.473257    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:41.487212    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:41.487221    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:41.498997    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:41.499007    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:41.539969    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:41.539980    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:41.567561    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:41.567572    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:41.581797    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:41.581809    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:41.592820    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:41.592831    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:41.605738    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:41.605748    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:41.619217    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:41.619231    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:44.133371    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:49.134670    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:49.135082    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:49.178019    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:49.178180    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:49.196024    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:49.196154    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:49.213893    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:49.213975    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:49.227951    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:49.228036    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:49.256058    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:49.256148    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:49.272098    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:49.272191    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:49.290731    3510 logs.go:276] 0 containers: []
	W0213 15:11:49.290747    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:49.290823    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:49.305997    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:49.306017    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:49.306025    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:49.339888    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:49.339900    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:49.358981    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:49.358993    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:49.374299    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:49.374310    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:49.391789    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:49.391800    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:49.407134    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:49.407147    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:49.418989    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:49.419000    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:49.431360    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:49.431373    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:49.446867    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:49.446881    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:49.460786    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:49.460797    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:49.471892    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:49.471904    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:49.485723    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:49.485732    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:49.497007    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:49.497022    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:49.500902    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:49.500910    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:49.526416    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:49.526426    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:49.540227    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:49.540236    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:49.562873    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:49.562880    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:52.077660    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:11:57.080135    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:11:57.080357    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:11:57.111572    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:11:57.111698    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:11:57.129830    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:11:57.129940    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:11:57.149729    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:11:57.149815    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:11:57.160653    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:11:57.160725    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:11:57.171855    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:11:57.171916    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:11:57.182353    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:11:57.182432    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:11:57.192280    3510 logs.go:276] 0 containers: []
	W0213 15:11:57.192294    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:11:57.192356    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:11:57.208356    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:11:57.208374    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:11:57.208380    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:11:57.222766    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:11:57.222777    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:11:57.239112    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:11:57.239123    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:11:57.261500    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:11:57.261509    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:11:57.275373    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:11:57.275383    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:11:57.286140    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:11:57.286150    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:11:57.297354    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:11:57.297364    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:11:57.308762    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:11:57.308774    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:11:57.320115    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:11:57.320125    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:11:57.324843    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:11:57.324851    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:11:57.354877    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:11:57.354888    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:11:57.370134    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:11:57.370144    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:11:57.392549    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:11:57.392560    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:11:57.406367    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:11:57.406378    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:11:57.420920    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:11:57.420927    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:11:57.455140    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:11:57.455154    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:11:57.469466    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:11:57.469478    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:11:59.983635    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:04.985852    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:04.986006    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:05.000908    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:12:05.000992    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:05.011875    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:12:05.011942    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:05.022793    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:12:05.022857    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:05.032986    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:12:05.033055    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:05.043293    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:12:05.043359    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:05.054041    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:12:05.054117    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:05.063996    3510 logs.go:276] 0 containers: []
	W0213 15:12:05.064007    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:05.064070    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:05.073931    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:12:05.073946    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:05.073951    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:05.077994    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:12:05.077999    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:12:05.089394    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:12:05.089405    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:12:05.100454    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:12:05.100463    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:12:05.111933    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:12:05.111947    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:05.123582    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:05.123591    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:05.138735    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:12:05.138743    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:12:05.152532    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:12:05.152543    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:12:05.170674    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:12:05.170684    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:12:05.184265    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:05.184274    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:05.219068    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:12:05.219082    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:12:05.245370    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:12:05.245381    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:12:05.259831    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:12:05.259842    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:12:05.271662    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:12:05.271676    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:12:05.288045    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:12:05.288055    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:12:05.299653    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:12:05.299664    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:12:05.315118    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:05.315128    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:07.839600    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:12.841747    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:12.841889    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:12.854264    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:12:12.854332    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:12.864914    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:12:12.864972    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:12.875450    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:12:12.875524    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:12.885834    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:12:12.885899    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:12.895778    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:12:12.895846    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:12.906274    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:12:12.906351    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:12.915851    3510 logs.go:276] 0 containers: []
	W0213 15:12:12.915862    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:12.915921    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:12.926325    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:12:12.926340    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:12.926345    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:12.930523    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:12:12.930530    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:12:12.941861    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:12:12.941871    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:12:12.956969    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:12:12.956978    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:12:12.968358    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:12:12.968369    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:12:12.982240    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:12:12.982250    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:12:13.007455    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:12:13.007465    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:12:13.021580    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:12:13.021590    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:12:13.036066    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:12:13.036076    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:12:13.048072    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:12:13.048083    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:12:13.063590    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:13.063599    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:13.078363    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:13.078369    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:13.112519    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:12:13.112529    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:12:13.124299    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:12:13.124309    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:12:13.141634    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:12:13.141645    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:12:13.153932    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:12:13.153942    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:13.166132    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:13.166143    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:15.688752    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:20.690898    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:20.691072    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:12:20.704729    3510 logs.go:276] 2 containers: [bf9867a5c7c2 ea48366b9587]
	I0213 15:12:20.704812    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:12:20.715749    3510 logs.go:276] 2 containers: [4debea6fab24 c9bca2ddc84e]
	I0213 15:12:20.715824    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:12:20.726203    3510 logs.go:276] 1 containers: [ec6984d67105]
	I0213 15:12:20.726278    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:12:20.736202    3510 logs.go:276] 2 containers: [366f7b27eb91 414a8117b44a]
	I0213 15:12:20.736274    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:12:20.747050    3510 logs.go:276] 1 containers: [a5ec7c222427]
	I0213 15:12:20.747122    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:12:20.757702    3510 logs.go:276] 2 containers: [5942bd8b3ac5 2c330ef72602]
	I0213 15:12:20.757771    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:12:20.768207    3510 logs.go:276] 0 containers: []
	W0213 15:12:20.768217    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:12:20.768270    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:12:20.778618    3510 logs.go:276] 2 containers: [c5830c7f0238 65685aa935ee]
	I0213 15:12:20.778633    3510 logs.go:123] Gathering logs for kube-scheduler [414a8117b44a] ...
	I0213 15:12:20.778638    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414a8117b44a"
	I0213 15:12:20.793246    3510 logs.go:123] Gathering logs for kube-controller-manager [2c330ef72602] ...
	I0213 15:12:20.793258    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c330ef72602"
	I0213 15:12:20.807376    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:12:20.807387    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 15:12:20.823071    3510 logs.go:123] Gathering logs for kube-apiserver [bf9867a5c7c2] ...
	I0213 15:12:20.823080    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf9867a5c7c2"
	I0213 15:12:20.837935    3510 logs.go:123] Gathering logs for kube-scheduler [366f7b27eb91] ...
	I0213 15:12:20.837946    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 366f7b27eb91"
	I0213 15:12:20.849666    3510 logs.go:123] Gathering logs for kube-proxy [a5ec7c222427] ...
	I0213 15:12:20.849677    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5ec7c222427"
	I0213 15:12:20.861055    3510 logs.go:123] Gathering logs for kube-controller-manager [5942bd8b3ac5] ...
	I0213 15:12:20.861066    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5942bd8b3ac5"
	I0213 15:12:20.878386    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:12:20.878396    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:12:20.913335    3510 logs.go:123] Gathering logs for coredns [ec6984d67105] ...
	I0213 15:12:20.913346    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6984d67105"
	I0213 15:12:20.924286    3510 logs.go:123] Gathering logs for storage-provisioner [c5830c7f0238] ...
	I0213 15:12:20.924295    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5830c7f0238"
	I0213 15:12:20.935102    3510 logs.go:123] Gathering logs for storage-provisioner [65685aa935ee] ...
	I0213 15:12:20.935111    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65685aa935ee"
	I0213 15:12:20.946825    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:12:20.946836    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:12:20.959354    3510 logs.go:123] Gathering logs for kube-apiserver [ea48366b9587] ...
	I0213 15:12:20.959366    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea48366b9587"
	I0213 15:12:20.988922    3510 logs.go:123] Gathering logs for etcd [c9bca2ddc84e] ...
	I0213 15:12:20.988936    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9bca2ddc84e"
	I0213 15:12:21.003839    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:12:21.003850    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:12:21.025893    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:12:21.025900    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:12:21.030274    3510 logs.go:123] Gathering logs for etcd [4debea6fab24] ...
	I0213 15:12:21.030284    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4debea6fab24"
	I0213 15:12:23.545112    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:12:28.547322    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:12:28.547402    3510 kubeadm.go:640] restartCluster took 4m3.551788833s
	W0213 15:12:28.547470    3510 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
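restartCluster spent just over four minutes in the probe-gather-retry loop above before giving up. The shape of that loop, written as plain shell (timings read off the timestamps; the real loop is Go code in api_server.go and logs.go):

    deadline=$((SECONDS + 240))            # ~4m budget, inferred from "took 4m3.55s" above
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo 'apiserver healthz never reported healthy' >&2
        break
      fi
      # ...gather component logs (see the passes above), then retry
      sleep 2.5
    done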
	I0213 15:12:28.547497    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0213 15:12:29.575667    3510 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.028179833s)
	I0213 15:12:29.576601    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:12:29.581405    3510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 15:12:29.584097    3510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 15:12:29.586935    3510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
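These four missing files are the expected state immediately after `kubeadm reset --force`, which wipes /etc/kubernetes; the status-2 exit makes minikube skip stale-config cleanup and fall straight through to a fresh `kubeadm init`. Confirming from the guest (illustrative):

    sudo ls -la /etc/kubernetes/   # kubeconfig files gone; the directory is left largely empty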
	I0213 15:12:29.586947    3510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 15:12:29.604915    3510 kubeadm.go:322] [init] Using Kubernetes version: v1.24.1
	I0213 15:12:29.604943    3510 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 15:12:29.660556    3510 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 15:12:29.660646    3510 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 15:12:29.660698    3510 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0213 15:12:29.710805    3510 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 15:12:29.716021    3510 out.go:204]   - Generating certificates and keys ...
	I0213 15:12:29.716057    3510 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 15:12:29.716088    3510 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 15:12:29.716142    3510 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 15:12:29.716177    3510 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 15:12:29.716227    3510 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 15:12:29.716261    3510 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 15:12:29.716291    3510 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 15:12:29.716318    3510 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 15:12:29.716399    3510 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 15:12:29.716431    3510 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 15:12:29.716450    3510 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 15:12:29.716487    3510 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 15:12:30.006392    3510 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 15:12:30.108294    3510 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 15:12:30.292606    3510 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 15:12:30.376294    3510 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 15:12:30.406590    3510 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 15:12:30.406912    3510 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 15:12:30.406950    3510 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 15:12:30.474911    3510 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 15:12:30.478845    3510 out.go:204]   - Booting up control plane ...
	I0213 15:12:30.478885    3510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 15:12:30.478918    3510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 15:12:30.478954    3510 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 15:12:30.478990    3510 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 15:12:30.479057    3510 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 15:12:34.976958    3510 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.501476 seconds
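Note the contrast with the failed restart path: after the reset, the same control plane comes up healthy in about 4.5 seconds, where the restarted state never answered /healthz in four minutes, which points at the restored on-disk state rather than the binaries. The probe that kept failing earlier should now succeed (illustrative):

    curl -sk --max-time 5 https://10.0.2.15:8443/healthz   # expected: ok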
	I0213 15:12:34.977055    3510 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 15:12:34.984591    3510 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 15:12:35.493260    3510 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 15:12:35.493368    3510 kubeadm.go:322] [mark-control-plane] Marking the node stopped-upgrade-809000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 15:12:35.997459    3510 kubeadm.go:322] [bootstrap-token] Using token: moy1oe.z4h4igdjgcnkbsan
	I0213 15:12:36.003757    3510 out.go:204]   - Configuring RBAC rules ...
	I0213 15:12:36.003813    3510 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 15:12:36.003863    3510 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 15:12:36.007487    3510 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 15:12:36.008186    3510 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 15:12:36.009125    3510 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 15:12:36.010057    3510 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 15:12:36.013355    3510 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 15:12:36.166082    3510 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 15:12:36.401294    3510 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 15:12:36.401834    3510 kubeadm.go:322] 
	I0213 15:12:36.401867    3510 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 15:12:36.401870    3510 kubeadm.go:322] 
	I0213 15:12:36.401909    3510 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 15:12:36.401915    3510 kubeadm.go:322] 
	I0213 15:12:36.401926    3510 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 15:12:36.401964    3510 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 15:12:36.401997    3510 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 15:12:36.402000    3510 kubeadm.go:322] 
	I0213 15:12:36.402030    3510 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 15:12:36.402034    3510 kubeadm.go:322] 
	I0213 15:12:36.402065    3510 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 15:12:36.402071    3510 kubeadm.go:322] 
	I0213 15:12:36.402098    3510 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 15:12:36.402138    3510 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 15:12:36.402175    3510 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 15:12:36.402179    3510 kubeadm.go:322] 
	I0213 15:12:36.402222    3510 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 15:12:36.402271    3510 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 15:12:36.402275    3510 kubeadm.go:322] 
	I0213 15:12:36.402321    3510 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token moy1oe.z4h4igdjgcnkbsan \
	I0213 15:12:36.402374    3510 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d \
	I0213 15:12:36.402387    3510 kubeadm.go:322] 	--control-plane 
	I0213 15:12:36.402391    3510 kubeadm.go:322] 
	I0213 15:12:36.402442    3510 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 15:12:36.402446    3510 kubeadm.go:322] 
	I0213 15:12:36.402491    3510 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token moy1oe.z4h4igdjgcnkbsan \
	I0213 15:12:36.402545    3510 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59d33d1eb40ab9fa290fa08077c20062eca217e8d74d3cd8b9b4fd2d5d6aeb8d 
	I0213 15:12:36.402739    3510 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 15:12:36.402799    3510 cni.go:84] Creating CNI manager for ""
	I0213 15:12:36.402815    3510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:12:36.405495    3510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 15:12:36.413609    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 15:12:36.416505    3510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
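The 457-byte payload itself is not shown in the log; a typical bridge conflist of roughly that size looks like the following (illustrative example, not minikube's exact template):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF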
	I0213 15:12:36.421473    3510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 15:12:36.421514    3510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 15:12:36.421522    3510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fb52fe04bc8b044b129ef2ff27607d20a9fceb93 minikube.k8s.io/name=stopped-upgrade-809000 minikube.k8s.io/updated_at=2024_02_13T15_12_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 15:12:36.465318    3510 kubeadm.go:1088] duration metric: took 43.839584ms to wait for elevateKubeSystemPrivileges.
	I0213 15:12:36.465352    3510 ops.go:34] apiserver oom_adj: -16
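An oom_adj of -16 means the kubelet has shielded kube-apiserver from the kernel OOM killer (lower values are killed later); minikube records it here after bringing the cluster up. The value comes from the command run just above:

    cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16 in this run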
	I0213 15:12:36.465366    3510 host.go:66] Checking if "stopped-upgrade-809000" exists ...
	I0213 15:12:36.466444    3510 main.go:141] libmachine: Using SSH client type: external
	I0213 15:12:36.466462    3510 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa (-rw-------)
	I0213 15:12:36.466478    3510 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa -p 50309] /usr/bin/ssh <nil>}
	I0213 15:12:36.466490    3510 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa -p 50309 -f -NTL 50344:localhost:8443
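The final ssh invocation is a pure port forward: -N runs no remote command, -T allocates no TTY, and -L publishes the guest apiserver on host port 50344. Stripped to its essentials (same flags, illustrative standalone form):

    ssh -N -T -L 50344:localhost:8443 -p 50309 \
        -i /Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa \
        docker@localhost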
	I0213 15:12:36.507538    3510 kubeadm.go:406] StartCluster complete in 4m11.567802208s
	I0213 15:12:36.507598    3510 settings.go:142] acquiring lock: {Name:mkdd6397441cfaf6d06a74b65d6ddefdb863237c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:12:36.507870    3510 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:12:36.508555    3510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/kubeconfig: {Name:mkf66d96abab1e512e6f2721c341e70e5b11c9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:12:36.508905    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 15:12:36.508979    3510 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 15:12:36.509041    3510 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-809000"
	I0213 15:12:36.509055    3510 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-809000"
	W0213 15:12:36.509058    3510 addons.go:243] addon storage-provisioner should already be in state true
	I0213 15:12:36.509080    3510 host.go:66] Checking if "stopped-upgrade-809000" exists ...
	I0213 15:12:36.509093    3510 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:12:36.509076    3510 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-809000"
	I0213 15:12:36.509159    3510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-809000"
	I0213 15:12:36.509276    3510 kapi.go:59] client config for stopped-upgrade-809000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101777f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 15:12:36.513601    3510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 15:12:36.516543    3510 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 15:12:36.516549    3510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 15:12:36.516557    3510 sshutil.go:53] new ssh client: &{IP:localhost Port:50309 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa Username:docker}
	I0213 15:12:36.517782    3510 kapi.go:59] client config for stopped-upgrade-809000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/profiles/stopped-upgrade-809000/client.key", CAFile:"/Users/jenkins/minikube-integration/18170-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101777f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 15:12:36.517908    3510 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-809000"
	W0213 15:12:36.517914    3510 addons.go:243] addon default-storageclass should already be in state true
	I0213 15:12:36.517925    3510 host.go:66] Checking if "stopped-upgrade-809000" exists ...
	I0213 15:12:36.518757    3510 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 15:12:36.518762    3510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 15:12:36.518769    3510 sshutil.go:53] new ssh client: &{IP:localhost Port:50309 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/stopped-upgrade-809000/id_rsa Username:docker}
	I0213 15:12:36.548394    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           10.0.2.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 15:12:36.563995    3510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 15:12:36.570596    3510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 15:12:37.010819    3510 start.go:929] {"host.minikube.internal": 10.0.2.2} host record injected into CoreDNS's ConfigMap
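
    (Editor's note: the sed pipeline a few lines up injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host. Reconstructed from that sed script, the injected block is:

        hosts {
           10.0.2.2 host.minikube.internal
           fallthrough
        }
    )
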
	W0213 15:13:06.511172    3510 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "stopped-upgrade-809000" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	E0213 15:13:06.511184    3510 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	I0213 15:13:06.511194    3510 start.go:223] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:13:06.515885    3510 out.go:177] * Verifying Kubernetes components...
	I0213 15:13:06.519871    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 15:13:06.525254    3510 api_server.go:52] waiting for apiserver process to appear ...
	I0213 15:13:06.525329    3510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 15:13:06.530202    3510 api_server.go:72] duration metric: took 18.994ms to wait for apiserver process to appear ...
	I0213 15:13:06.530214    3510 api_server.go:88] waiting for apiserver healthz status ...
	I0213 15:13:06.530223    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0213 15:13:07.017037    3510 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0213 15:13:07.021208    3510 out.go:177] * Enabled addons: storage-provisioner
	I0213 15:13:07.029125    3510 addons.go:505] enable addons completed in 30.520804958s: enabled=[storage-provisioner]
	I0213 15:13:11.532226    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:11.532244    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:16.532379    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:16.532403    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:21.532561    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:21.532591    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:26.532838    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:26.532861    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:31.533251    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:31.533294    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:36.533585    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:36.533611    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:41.534232    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:41.534279    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:46.535524    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:46.535547    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:51.536694    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:51.536743    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:13:56.537636    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:13:56.537659    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:14:01.539432    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:14:01.539474    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:14:06.541607    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
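
    (Editor's note: from here on the apiserver never answers /healthz; every probe above times out, so minikube falls back to gathering component logs. The probe is equivalent to the following illustrative check, where -k skips TLS verification since the endpoint is signed by the cluster CA:

        curl -sk --max-time 5 https://10.0.2.15:8443/healthz
    )
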
	I0213 15:14:06.541743    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:14:06.558661    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:14:06.558755    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:14:06.572199    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:14:06.572275    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:14:06.583642    3510 logs.go:276] 2 containers: [c39f02d73180 82cfef7f8576]
	I0213 15:14:06.583704    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:14:06.594629    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:14:06.594702    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:14:06.605367    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:14:06.605445    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:14:06.616271    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:14:06.616345    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:14:06.626922    3510 logs.go:276] 0 containers: []
	W0213 15:14:06.626933    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:14:06.626995    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:14:06.637645    3510 logs.go:276] 1 containers: [06e60b7523c4]
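
    (Editor's note: each docker ps call in this block follows one pattern: filter on the kubelet-assigned container-name prefix k8s_<component> and print only the container ID, e.g.:

        docker ps -a --filter=name=k8s_etcd --format='{{.ID}}'
    )
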
	I0213 15:14:06.637660    3510 logs.go:123] Gathering logs for coredns [82cfef7f8576] ...
	I0213 15:14:06.637665    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cfef7f8576"
	I0213 15:14:06.649524    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:14:06.649535    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:14:06.667832    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:14:06.667843    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:14:06.672445    3510 logs.go:123] Gathering logs for coredns [c39f02d73180] ...
	I0213 15:14:06.672455    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39f02d73180"
	I0213 15:14:06.684132    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:14:06.684143    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:14:06.699466    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:14:06.699481    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:14:06.723962    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:14:06.723974    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:14:06.754945    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:14:06.755043    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
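
    (Editor's note: the two kubelet problems above are node-authorizer denials: a node's kubelet may only read ConfigMaps the authorizer can tie to a pod scheduled on that node, and after the upgrade no such relationship exists yet for coredns. One way to confirm the denial from the control plane, as an illustrative command not run in this log:

        kubectl auth can-i list configmaps -n kube-system \
          --as=system:node:stopped-upgrade-809000 --as-group=system:nodes
    )
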
	I0213 15:14:06.755949    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:14:06.755954    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:14:06.770473    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:14:06.770483    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:14:06.804875    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:14:06.804886    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:14:06.820955    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:14:06.820964    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:14:06.832744    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:14:06.832753    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:14:06.844692    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:14:06.844704    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:14:06.858948    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:14:06.858962    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:14:06.858987    3510 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0213 15:14:06.858990    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	  Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:14:06.858994    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	  Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:14:06.858999    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:14:06.859002    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:14:16.862910    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:14:21.865074    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:14:21.865180    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:14:21.876223    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:14:21.876296    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:14:21.886858    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:14:21.886926    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:14:21.897405    3510 logs.go:276] 2 containers: [c39f02d73180 82cfef7f8576]
	I0213 15:14:21.897472    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:14:21.908929    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:14:21.908989    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:14:21.919181    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:14:21.919249    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:14:21.929709    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:14:21.929781    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:14:21.942503    3510 logs.go:276] 0 containers: []
	W0213 15:14:21.942515    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:14:21.942571    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:14:21.953220    3510 logs.go:276] 1 containers: [06e60b7523c4]
	I0213 15:14:21.953243    3510 logs.go:123] Gathering logs for coredns [82cfef7f8576] ...
	I0213 15:14:21.953248    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cfef7f8576"
	I0213 15:14:21.965192    3510 logs.go:123] Gathering logs for coredns [c39f02d73180] ...
	I0213 15:14:21.965201    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39f02d73180"
	I0213 15:14:21.977013    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:14:21.977023    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:14:21.993706    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:14:21.993716    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:14:22.005177    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:14:22.005188    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:14:22.028571    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:14:22.028580    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:14:22.044129    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:14:22.044139    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:14:22.061672    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:14:22.061685    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:14:22.073160    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:14:22.073168    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:14:22.077600    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:14:22.077606    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:14:22.095108    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:14:22.095118    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:14:22.107272    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:14:22.107282    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:14:22.139523    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:14:22.139616    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:14:22.140548    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:14:22.140558    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:14:22.174166    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:14:22.174177    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:14:22.174203    3510 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0213 15:14:22.174208    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	  Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:14:22.174212    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	  Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:14:22.174216    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:14:22.174219    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:14:32.176704    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:14:37.179085    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:14:37.179259    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:14:37.198743    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:14:37.198864    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:14:37.212973    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:14:37.213047    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:14:37.225378    3510 logs.go:276] 2 containers: [c39f02d73180 82cfef7f8576]
	I0213 15:14:37.225441    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:14:37.236161    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:14:37.236237    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:14:37.247106    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:14:37.247172    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:14:37.257229    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:14:37.257295    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:14:37.267591    3510 logs.go:276] 0 containers: []
	W0213 15:14:37.267604    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:14:37.267658    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:14:37.277625    3510 logs.go:276] 1 containers: [06e60b7523c4]
	I0213 15:14:37.277642    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:14:37.277647    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:14:37.289327    3510 logs.go:123] Gathering logs for coredns [c39f02d73180] ...
	I0213 15:14:37.289336    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39f02d73180"
	I0213 15:14:37.301094    3510 logs.go:123] Gathering logs for coredns [82cfef7f8576] ...
	I0213 15:14:37.301105    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cfef7f8576"
	I0213 15:14:37.312913    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:14:37.312925    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:14:37.327602    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:14:37.327611    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:14:37.331837    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:14:37.331846    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:14:37.345689    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:14:37.345698    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:14:37.359753    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:14:37.359765    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:14:37.384571    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:14:37.384581    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:14:37.396795    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:14:37.396805    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:14:37.427506    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:14:37.427602    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:14:37.428481    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:14:37.428485    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:14:37.462550    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:14:37.462563    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:14:37.474532    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:14:37.474545    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:14:37.492516    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:14:37.492525    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:14:37.492548    3510 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0213 15:14:37.492552    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	  Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:14:37.492555    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	  Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:14:37.492558    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:14:37.492564    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:14:47.496004    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:14:52.498692    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:14:52.499040    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:14:52.533152    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:14:52.533272    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:14:52.561538    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:14:52.561643    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:14:52.574889    3510 logs.go:276] 4 containers: [c22661f1d8e2 e7708377582a c39f02d73180 82cfef7f8576]
	I0213 15:14:52.574963    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:14:52.585915    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:14:52.585992    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:14:52.596522    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:14:52.596585    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:14:52.606631    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:14:52.606689    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:14:52.616400    3510 logs.go:276] 0 containers: []
	W0213 15:14:52.616412    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:14:52.616471    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:14:52.626872    3510 logs.go:276] 1 containers: [06e60b7523c4]
	I0213 15:14:52.626894    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:14:52.626899    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:14:52.641128    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:14:52.641139    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:14:52.653545    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:14:52.653556    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:14:52.665517    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:14:52.665527    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:14:52.697799    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:14:52.697890    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:14:52.698763    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:14:52.698767    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:14:52.703158    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:14:52.703164    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:14:52.717121    3510 logs.go:123] Gathering logs for coredns [c22661f1d8e2] ...
	I0213 15:14:52.717134    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c22661f1d8e2"
	I0213 15:14:52.728554    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:14:52.728573    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:14:52.743367    3510 logs.go:123] Gathering logs for coredns [82cfef7f8576] ...
	I0213 15:14:52.743380    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cfef7f8576"
	I0213 15:14:52.755522    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:14:52.755535    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:14:52.780640    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:14:52.780646    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:14:52.815013    3510 logs.go:123] Gathering logs for coredns [e7708377582a] ...
	I0213 15:14:52.815025    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7708377582a"
	I0213 15:14:52.826458    3510 logs.go:123] Gathering logs for coredns [c39f02d73180] ...
	I0213 15:14:52.826468    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39f02d73180"
	I0213 15:14:52.847892    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:14:52.847900    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:14:52.859932    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:14:52.859940    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:14:52.879004    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:14:52.879012    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:14:52.879039    3510 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0213 15:14:52.879043    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	  Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:14:52.879046    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	  Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:14:52.879050    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:14:52.879054    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:15:02.883103    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:15:07.885665    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:15:07.886005    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:15:07.917330    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:15:07.917457    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:15:07.936136    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:15:07.936233    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:15:07.950483    3510 logs.go:276] 4 containers: [c22661f1d8e2 e7708377582a c39f02d73180 82cfef7f8576]
	I0213 15:15:07.950560    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:15:07.963396    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:15:07.963468    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:15:07.973737    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:15:07.973795    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:15:07.984327    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:15:07.984396    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:15:07.994701    3510 logs.go:276] 0 containers: []
	W0213 15:15:07.994711    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:15:07.994768    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:15:08.005626    3510 logs.go:276] 1 containers: [06e60b7523c4]
	I0213 15:15:08.005639    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:15:08.005643    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:15:08.019629    3510 logs.go:123] Gathering logs for coredns [82cfef7f8576] ...
	I0213 15:15:08.019639    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cfef7f8576"
	I0213 15:15:08.031415    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:15:08.031425    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:15:08.052543    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:15:08.052553    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:15:08.076267    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:15:08.076273    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:15:08.088361    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:15:08.088373    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:15:08.092553    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:15:08.092559    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:15:08.126414    3510 logs.go:123] Gathering logs for coredns [c22661f1d8e2] ...
	I0213 15:15:08.126426    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c22661f1d8e2"
	I0213 15:15:08.137906    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:15:08.137918    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:15:08.152552    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:15:08.152563    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:15:08.164503    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:15:08.164514    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:15:08.176663    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:15:08.176674    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:15:08.208704    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:15:08.208800    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:15:08.209676    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:15:08.209680    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:15:08.223227    3510 logs.go:123] Gathering logs for coredns [e7708377582a] ...
	I0213 15:15:08.223236    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7708377582a"
	I0213 15:15:08.234571    3510 logs.go:123] Gathering logs for coredns [c39f02d73180] ...
	I0213 15:15:08.234581    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39f02d73180"
	I0213 15:15:08.250235    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:15:08.250246    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:15:08.250270    3510 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0213 15:15:08.250274    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	  Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:15:08.250278    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	  Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:15:08.250281    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:15:08.250284    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:15:18.254341    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:15:23.256894    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:15:23.257286    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:15:23.292143    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:15:23.292275    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:15:23.313156    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:15:23.313255    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:15:23.328186    3510 logs.go:276] 4 containers: [c22661f1d8e2 e7708377582a c39f02d73180 82cfef7f8576]
	I0213 15:15:23.328251    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:15:23.340355    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:15:23.340429    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:15:23.351595    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:15:23.351672    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:15:23.362278    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:15:23.362348    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:15:23.372088    3510 logs.go:276] 0 containers: []
	W0213 15:15:23.372098    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:15:23.372158    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:15:23.382565    3510 logs.go:276] 1 containers: [06e60b7523c4]
	I0213 15:15:23.382580    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:15:23.382584    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:15:23.395839    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:15:23.395852    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:15:23.428285    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:15:23.428378    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:15:23.429279    3510 logs.go:123] Gathering logs for coredns [c22661f1d8e2] ...
	I0213 15:15:23.429284    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c22661f1d8e2"
	I0213 15:15:23.441713    3510 logs.go:123] Gathering logs for coredns [82cfef7f8576] ...
	I0213 15:15:23.441727    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cfef7f8576"
	I0213 15:15:23.453904    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:15:23.453915    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:15:23.478683    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:15:23.478691    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:15:23.493329    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:15:23.493339    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:15:23.507565    3510 logs.go:123] Gathering logs for coredns [e7708377582a] ...
	I0213 15:15:23.507576    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7708377582a"
	I0213 15:15:23.519569    3510 logs.go:123] Gathering logs for coredns [c39f02d73180] ...
	I0213 15:15:23.519580    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39f02d73180"
	I0213 15:15:23.534178    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:15:23.534189    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:15:23.545893    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:15:23.545905    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:15:23.557859    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:15:23.557869    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:15:23.561918    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:15:23.561923    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:15:23.595721    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:15:23.595730    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:15:23.612063    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:15:23.612073    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:15:23.634207    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:15:23.634216    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:15:23.634240    3510 out.go:239] X Problems detected in kubelet:
	W0213 15:15:23.634246    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:15:23.634250    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:15:23.634253    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:15:23.634257    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:15:33.638292    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:15:38.640867    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:15:38.641037    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:15:38.663952    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:15:38.664049    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:15:38.679563    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:15:38.679649    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:15:38.692128    3510 logs.go:276] 4 containers: [c22661f1d8e2 e7708377582a c39f02d73180 82cfef7f8576]
	I0213 15:15:38.692191    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:15:38.703171    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:15:38.703236    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:15:38.714693    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:15:38.714764    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:15:38.728563    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:15:38.728631    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:15:38.738341    3510 logs.go:276] 0 containers: []
	W0213 15:15:38.738351    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:15:38.738406    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:15:38.748829    3510 logs.go:276] 1 containers: [06e60b7523c4]
	I0213 15:15:38.748844    3510 logs.go:123] Gathering logs for coredns [82cfef7f8576] ...
	I0213 15:15:38.748852    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cfef7f8576"
	I0213 15:15:38.760475    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:15:38.760486    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:15:38.784009    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:15:38.784015    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:15:38.818023    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:15:38.818032    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:15:38.832352    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:15:38.832361    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:15:38.843687    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:15:38.843698    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:15:38.854902    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:15:38.854913    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:15:38.859495    3510 logs.go:123] Gathering logs for coredns [c22661f1d8e2] ...
	I0213 15:15:38.859502    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c22661f1d8e2"
	I0213 15:15:38.871715    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:15:38.871728    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:15:38.890921    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:15:38.890932    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:15:38.921968    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:15:38.922059    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:15:38.922941    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:15:38.922944    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:15:38.936642    3510 logs.go:123] Gathering logs for coredns [e7708377582a] ...
	I0213 15:15:38.936652    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7708377582a"
	I0213 15:15:38.948537    3510 logs.go:123] Gathering logs for coredns [c39f02d73180] ...
	I0213 15:15:38.948546    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39f02d73180"
	I0213 15:15:38.960623    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:15:38.960632    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:15:38.975685    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:15:38.975694    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:15:38.993388    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:15:38.993397    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:15:38.993421    3510 out.go:239] X Problems detected in kubelet:
	W0213 15:15:38.993425    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:15:38.993429    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:15:38.993433    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:15:38.993436    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:15:48.997010    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:15:53.999269    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:15:53.999735    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:15:54.036553    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:15:54.036700    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:15:54.059254    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:15:54.059368    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:15:54.074847    3510 logs.go:276] 4 containers: [c22661f1d8e2 e7708377582a c39f02d73180 82cfef7f8576]
	I0213 15:15:54.074926    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:15:54.087668    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:15:54.087737    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:15:54.098328    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:15:54.098394    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:15:54.109074    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:15:54.109134    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:15:54.119699    3510 logs.go:276] 0 containers: []
	W0213 15:15:54.119710    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:15:54.119767    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:15:54.130161    3510 logs.go:276] 1 containers: [06e60b7523c4]
	I0213 15:15:54.130176    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:15:54.130181    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:15:54.148076    3510 logs.go:123] Gathering logs for coredns [c22661f1d8e2] ...
	I0213 15:15:54.148088    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c22661f1d8e2"
	I0213 15:15:54.165708    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:15:54.165718    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:15:54.176883    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:15:54.176894    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:15:54.211532    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:15:54.211543    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:15:54.226035    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:15:54.226045    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:15:54.237835    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:15:54.237846    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:15:54.269758    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:15:54.269852    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:15:54.270729    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:15:54.270734    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:15:54.275234    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:15:54.275242    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:15:54.289276    3510 logs.go:123] Gathering logs for coredns [e7708377582a] ...
	I0213 15:15:54.289285    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7708377582a"
	I0213 15:15:54.303404    3510 logs.go:123] Gathering logs for coredns [c39f02d73180] ...
	I0213 15:15:54.303415    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39f02d73180"
	I0213 15:15:54.322303    3510 logs.go:123] Gathering logs for coredns [82cfef7f8576] ...
	I0213 15:15:54.322313    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cfef7f8576"
	I0213 15:15:54.338976    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:15:54.338987    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:15:54.353940    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:15:54.353951    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:15:54.365393    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:15:54.365402    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:15:54.389818    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:15:54.389827    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:15:54.389850    3510 out.go:239] X Problems detected in kubelet:
	W0213 15:15:54.389854    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:15:54.389858    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:15:54.389865    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:15:54.389868    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:04.393849    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:16:09.395104    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:16:09.395507    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:16:09.432927    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:16:09.433062    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:16:09.454125    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:16:09.454231    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:16:09.469072    3510 logs.go:276] 4 containers: [c22661f1d8e2 e7708377582a c39f02d73180 82cfef7f8576]
	I0213 15:16:09.469155    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:16:09.481626    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:16:09.481697    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:16:09.492246    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:16:09.492319    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:16:09.502552    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:16:09.502614    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:16:09.512565    3510 logs.go:276] 0 containers: []
	W0213 15:16:09.512575    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:16:09.512637    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:16:09.523103    3510 logs.go:276] 1 containers: [06e60b7523c4]
	I0213 15:16:09.523118    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:16:09.523123    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:16:09.527733    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:16:09.527742    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:16:09.563879    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:16:09.563890    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:16:09.578047    3510 logs.go:123] Gathering logs for coredns [c22661f1d8e2] ...
	I0213 15:16:09.578059    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c22661f1d8e2"
	I0213 15:16:09.594790    3510 logs.go:123] Gathering logs for coredns [82cfef7f8576] ...
	I0213 15:16:09.594804    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cfef7f8576"
	I0213 15:16:09.608733    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:16:09.608748    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:16:09.628303    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:16:09.628322    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:16:09.661671    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:16:09.661774    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:16:09.662686    3510 logs.go:123] Gathering logs for coredns [e7708377582a] ...
	I0213 15:16:09.662694    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7708377582a"
	I0213 15:16:09.676817    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:16:09.676832    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:16:09.689760    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:16:09.689773    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:16:09.706478    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:16:09.706502    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:16:09.721958    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:16:09.721968    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:16:09.733605    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:16:09.733616    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:16:09.757676    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:16:09.757686    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:16:09.772375    3510 logs.go:123] Gathering logs for coredns [c39f02d73180] ...
	I0213 15:16:09.772386    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39f02d73180"
	I0213 15:16:09.791647    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:09.791658    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:16:09.791683    3510 out.go:239] X Problems detected in kubelet:
	W0213 15:16:09.791688    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:16:09.791692    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:16:09.791697    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:09.791700    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:19.795001    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:16:24.796439    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:16:24.796534    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:16:24.808576    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:16:24.808662    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:16:24.828941    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:16:24.829005    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:16:24.845113    3510 logs.go:276] 4 containers: [c22661f1d8e2 e7708377582a c39f02d73180 82cfef7f8576]
	I0213 15:16:24.845192    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:16:24.861786    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:16:24.861879    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:16:24.872829    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:16:24.872899    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:16:24.892095    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:16:24.892182    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:16:24.903077    3510 logs.go:276] 0 containers: []
	W0213 15:16:24.903088    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:16:24.903148    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:16:24.914485    3510 logs.go:276] 1 containers: [06e60b7523c4]
	I0213 15:16:24.914502    3510 logs.go:123] Gathering logs for coredns [c22661f1d8e2] ...
	I0213 15:16:24.914508    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c22661f1d8e2"
	I0213 15:16:24.927740    3510 logs.go:123] Gathering logs for coredns [82cfef7f8576] ...
	I0213 15:16:24.927751    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cfef7f8576"
	I0213 15:16:24.941186    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:16:24.941198    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:16:24.954545    3510 logs.go:123] Gathering logs for coredns [c39f02d73180] ...
	I0213 15:16:24.954557    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c39f02d73180"
	I0213 15:16:24.972649    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:16:24.972666    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:16:24.992134    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:16:24.992151    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:16:25.017700    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:16:25.017714    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:16:25.030690    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:16:25.030700    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:16:25.064492    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:16:25.064593    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:16:25.065527    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:16:25.065536    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:16:25.103078    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:16:25.103088    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:16:25.119041    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:16:25.119052    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:16:25.135261    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:16:25.135272    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:16:25.140023    3510 logs.go:123] Gathering logs for coredns [e7708377582a] ...
	I0213 15:16:25.140029    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7708377582a"
	I0213 15:16:25.152760    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:16:25.152772    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:16:25.169294    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:16:25.169307    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:16:25.191168    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:25.191175    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:16:25.191195    3510 out.go:239] X Problems detected in kubelet:
	W0213 15:16:25.191199    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:16:25.191202    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:16:25.191206    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:25.191209    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:35.194060    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:16:40.196296    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:16:40.196483    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:16:40.208614    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:16:40.208694    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:16:40.221832    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:16:40.221904    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:16:40.233015    3510 logs.go:276] 4 containers: [7c9bd59b46ca 1b82b884ae8b c22661f1d8e2 e7708377582a]
	I0213 15:16:40.233089    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:16:40.245436    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:16:40.245509    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:16:40.255707    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:16:40.255778    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:16:40.265629    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:16:40.265700    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:16:40.275826    3510 logs.go:276] 0 containers: []
	W0213 15:16:40.275840    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:16:40.275902    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:16:40.286154    3510 logs.go:276] 1 containers: [06e60b7523c4]
	I0213 15:16:40.286172    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:16:40.286177    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:16:40.290966    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:16:40.290975    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:16:40.326305    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:16:40.326319    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:16:40.344263    3510 logs.go:123] Gathering logs for coredns [7c9bd59b46ca] ...
	I0213 15:16:40.344275    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9bd59b46ca"
	I0213 15:16:40.355663    3510 logs.go:123] Gathering logs for coredns [e7708377582a] ...
	I0213 15:16:40.355675    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7708377582a"
	I0213 15:16:40.367871    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:16:40.367883    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:16:40.385652    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:16:40.385662    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:16:40.409916    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:16:40.409924    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:16:40.421623    3510 logs.go:123] Gathering logs for coredns [c22661f1d8e2] ...
	I0213 15:16:40.421635    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c22661f1d8e2"
	I0213 15:16:40.434410    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:16:40.434423    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:16:40.448971    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:16:40.448981    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:16:40.460448    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:16:40.460460    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:16:40.490869    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:16:40.490959    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:16:40.491930    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:16:40.491933    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:16:40.505427    3510 logs.go:123] Gathering logs for coredns [1b82b884ae8b] ...
	I0213 15:16:40.505439    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b82b884ae8b"
	I0213 15:16:40.516416    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:16:40.516427    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:16:40.528184    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:40.528193    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:16:40.528218    3510 out.go:239] X Problems detected in kubelet:
	W0213 15:16:40.528222    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:16:40.528232    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:16:40.528235    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:40.528238    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:50.528479    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:16:55.529346    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:16:55.529828    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0213 15:16:55.569102    3510 logs.go:276] 1 containers: [5e51f2323c75]
	I0213 15:16:55.569229    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0213 15:16:55.590869    3510 logs.go:276] 1 containers: [c7ccbfc9da3f]
	I0213 15:16:55.590975    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0213 15:16:55.606991    3510 logs.go:276] 4 containers: [7c9bd59b46ca 1b82b884ae8b c22661f1d8e2 e7708377582a]
	I0213 15:16:55.607064    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0213 15:16:55.619134    3510 logs.go:276] 1 containers: [6bd553391f1b]
	I0213 15:16:55.619197    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0213 15:16:55.629818    3510 logs.go:276] 1 containers: [1a94bf610354]
	I0213 15:16:55.629888    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0213 15:16:55.640723    3510 logs.go:276] 1 containers: [2cc58a5453f6]
	I0213 15:16:55.640794    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0213 15:16:55.651603    3510 logs.go:276] 0 containers: []
	W0213 15:16:55.651613    3510 logs.go:278] No container was found matching "kindnet"
	I0213 15:16:55.651670    3510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0213 15:16:55.662361    3510 logs.go:276] 1 containers: [06e60b7523c4]
	I0213 15:16:55.662378    3510 logs.go:123] Gathering logs for coredns [c22661f1d8e2] ...
	I0213 15:16:55.662383    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c22661f1d8e2"
	I0213 15:16:55.674836    3510 logs.go:123] Gathering logs for coredns [e7708377582a] ...
	I0213 15:16:55.674846    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7708377582a"
	I0213 15:16:55.686698    3510 logs.go:123] Gathering logs for container status ...
	I0213 15:16:55.686713    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 15:16:55.699141    3510 logs.go:123] Gathering logs for coredns [7c9bd59b46ca] ...
	I0213 15:16:55.699155    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9bd59b46ca"
	I0213 15:16:55.710736    3510 logs.go:123] Gathering logs for kube-controller-manager [2cc58a5453f6] ...
	I0213 15:16:55.710750    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cc58a5453f6"
	I0213 15:16:55.728787    3510 logs.go:123] Gathering logs for Docker ...
	I0213 15:16:55.728797    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0213 15:16:55.751167    3510 logs.go:123] Gathering logs for kube-apiserver [5e51f2323c75] ...
	I0213 15:16:55.751174    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e51f2323c75"
	I0213 15:16:55.765295    3510 logs.go:123] Gathering logs for etcd [c7ccbfc9da3f] ...
	I0213 15:16:55.765305    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7ccbfc9da3f"
	I0213 15:16:55.779473    3510 logs.go:123] Gathering logs for describe nodes ...
	I0213 15:16:55.779483    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 15:16:55.813005    3510 logs.go:123] Gathering logs for coredns [1b82b884ae8b] ...
	I0213 15:16:55.813016    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b82b884ae8b"
	I0213 15:16:55.824968    3510 logs.go:123] Gathering logs for kube-scheduler [6bd553391f1b] ...
	I0213 15:16:55.824979    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bd553391f1b"
	I0213 15:16:55.839894    3510 logs.go:123] Gathering logs for kube-proxy [1a94bf610354] ...
	I0213 15:16:55.839908    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a94bf610354"
	I0213 15:16:55.851691    3510 logs.go:123] Gathering logs for storage-provisioner [06e60b7523c4] ...
	I0213 15:16:55.851701    3510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06e60b7523c4"
	I0213 15:16:55.863223    3510 logs.go:123] Gathering logs for kubelet ...
	I0213 15:16:55.863233    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 15:16:55.895906    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:16:55.895999    3510 logs.go:138] Found kubelet problem: Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:16:55.896971    3510 logs.go:123] Gathering logs for dmesg ...
	I0213 15:16:55.896975    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 15:16:55.900909    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:55.900919    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 15:16:55.900942    3510 out.go:239] X Problems detected in kubelet:
	W0213 15:16:55.900951    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: W0213 23:12:49.492625   10361 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	W0213 15:16:55.900954    3510 out.go:239]   Feb 13 23:12:49 stopped-upgrade-809000 kubelet[10361]: E0213 23:12:49.492653   10361 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-809000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-809000' and this object
	I0213 15:16:55.900957    3510 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:55.900960    3510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:05.905001    3510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0213 15:17:10.907710    3510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 15:17:10.915010    3510 out.go:177] 
	W0213 15:17:10.918997    3510 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0213 15:17:10.919026    3510 out.go:239] * 
	W0213 15:17:10.921335    3510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:17:10.933001    3510 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-809000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (616.43s)
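Diagnostic note on the failure above: the apiserver at https://10.0.2.15:8443/healthz never reports healthy within the 6m0s node wait, so roughly every 15 seconds the upgrade loop re-gathers the same container logs (docker ps filters, docker logs --tail 400, journalctl) and probes the endpoint again until the deadline. A minimal way to probe the same endpoint by hand, assuming a shell inside the guest (both paths below are the in-guest paths from the log; the serving certificate is self-signed, hence -k):

	# Probe the health endpoint the retry loop is polling
	curl -k https://10.0.2.15:8443/healthz
	# Equivalent check through the apiserver client, using the guest's kubeconfig
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl get --raw=/healthz \
	  --kubeconfig=/var/lib/minikube/kubeconfig

The recurring kubelet warning (configmaps "coredns" is forbidden ... no relationship found between node 'stopped-upgrade-809000' and this object) is a node-authorizer RBAC message and is not necessarily the cause of the hang; it is simply the only kubelet problem the log scanner finds on each pass.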

TestPause/serial/Start (10.09s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-927000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-927000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.034982041s)

-- stdout --
	* [pause-927000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-927000 in cluster pause-927000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-927000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-927000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-927000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-927000 -n pause-927000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-927000 -n pause-927000: exit status 7 (51.401167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-927000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.09s)
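This failure, and every qemu2 start failure below it, has the same root cause: nothing is accepting connections on `/var/run/socket_vmnet`, so the VM's network backend can never be attached. A quick sanity check on the build host, assuming the usual root/launchd deployment of socket_vmnet (the service management details are an assumption, not something this log shows), might be:

	ls -l /var/run/socket_vmnet               # does the unix socket exist at all?
	ps aux | grep '[s]ocket_vmnet'            # is the daemon process alive?
	sudo launchctl list | grep socket_vmnet   # assumption: daemon is managed by launchd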
TestNoKubernetes/serial/StartWithK8s (9.87s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-504000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-504000 --driver=qemu2 : exit status 80 (9.815708209s)

                                                
                                                
-- stdout --
	* [NoKubernetes-504000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-504000 in cluster NoKubernetes-504000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-504000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-504000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-504000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-504000 -n NoKubernetes-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-504000 -n NoKubernetes-504000: exit status 7 (56.982291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.87s)
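Each post-mortem above pulls only the `Host` field out of `minikube status` via a Go template. The same template mechanism can report the other status fields in one call; the field names below come from minikube's documented status output, not from this log:

	out/minikube-darwin-arm64 status -p NoKubernetes-504000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'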
TestNoKubernetes/serial/StartWithStopK8s (5.87s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-504000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-504000 --no-kubernetes --driver=qemu2 : exit status 80 (5.831930292s)

                                                
                                                
-- stdout --
	* [NoKubernetes-504000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-504000
	* Restarting existing qemu2 VM for "NoKubernetes-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-504000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-504000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-504000 -n NoKubernetes-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-504000 -n NoKubernetes-504000: exit status 7 (35.307792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.87s)
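Note the changed failure mode from here on: the earlier subtests already created the `NoKubernetes-504000` profile, so minikube now restarts the existing (never-booted) VM and the error shifts from `creating host` to `driver start`. The recovery path is the one the output itself suggests, dropping the wedged profile before starting again:

	out/minikube-darwin-arm64 delete -p NoKubernetes-504000    # suggested verbatim by the failure output
	out/minikube-darwin-arm64 start -p NoKubernetes-504000 --no-kubernetes --driver=qemu2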
TestNoKubernetes/serial/Start (5.91s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-504000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-504000 --no-kubernetes --driver=qemu2 : exit status 80 (5.845356667s)

                                                
                                                
-- stdout --
	* [NoKubernetes-504000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-504000
	* Restarting existing qemu2 VM for "NoKubernetes-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-504000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-504000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-504000 -n NoKubernetes-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-504000 -n NoKubernetes-504000: exit status 7 (64.607ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.91s)

TestNoKubernetes/serial/StartNoArgs (5.87s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-504000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-504000 --driver=qemu2 : exit status 80 (5.83654425s)

                                                
                                                
-- stdout --
	* [NoKubernetes-504000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-504000
	* Restarting existing qemu2 VM for "NoKubernetes-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-504000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-504000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-504000 -n NoKubernetes-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-504000 -n NoKubernetes-504000: exit status 7 (37.164125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.87s)
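The network-plugin matrix that begins here varies only the CNI selection. With no `--cni` flag, the verbose trace below shows the auto-selection path (qemu2 driver plus docker runtime on Kubernetes v1.24+ recommends bridge); later groups pin a plugin explicitly. Both invocations are abbreviated from commands in this report:

	out/minikube-darwin-arm64 start -p auto-891000 --memory=3072 --driver=qemu2                      # CNI auto-selected: bridge
	out/minikube-darwin-arm64 start -p flannel-891000 --memory=3072 --cni=flannel --driver=qemu2     # CNI pinned: flannel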
TestNetworkPlugins/group/auto/Start (9.72s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.720244583s)

                                                
                                                
-- stdout --
	* [auto-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-891000 in cluster auto-891000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:15:37.862208    3774 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:15:37.862360    3774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:15:37.862363    3774 out.go:304] Setting ErrFile to fd 2...
	I0213 15:15:37.862366    3774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:15:37.862503    3774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:15:37.863569    3774 out.go:298] Setting JSON to false
	I0213 15:15:37.879782    3774 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2559,"bootTime":1707863578,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:15:37.879843    3774 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:15:37.885226    3774 out.go:177] * [auto-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:15:37.892160    3774 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:15:37.896217    3774 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:15:37.892254    3774 notify.go:220] Checking for updates...
	I0213 15:15:37.899197    3774 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:15:37.902231    3774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:15:37.905324    3774 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:15:37.908275    3774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:15:37.911631    3774 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:15:37.911692    3774 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:15:37.911737    3774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:15:37.916259    3774 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:15:37.923238    3774 start.go:298] selected driver: qemu2
	I0213 15:15:37.923243    3774 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:15:37.923250    3774 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:15:37.925583    3774 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:15:37.929229    3774 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:15:37.932212    3774 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:15:37.932246    3774 cni.go:84] Creating CNI manager for ""
	I0213 15:15:37.932252    3774 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:15:37.932256    3774 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:15:37.932263    3774 start_flags.go:321] config:
	{Name:auto-891000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:15:37.936584    3774 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:15:37.944214    3774 out.go:177] * Starting control plane node auto-891000 in cluster auto-891000
	I0213 15:15:37.948118    3774 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:15:37.948131    3774 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:15:37.948137    3774 cache.go:56] Caching tarball of preloaded images
	I0213 15:15:37.948193    3774 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:15:37.948198    3774 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:15:37.948254    3774 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/auto-891000/config.json ...
	I0213 15:15:37.948269    3774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/auto-891000/config.json: {Name:mk7db8dbe2ccd620317f35236d2e1351e792b5ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:15:37.948483    3774 start.go:365] acquiring machines lock for auto-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:15:37.948512    3774 start.go:369] acquired machines lock for "auto-891000" in 24.084µs
	I0213 15:15:37.948521    3774 start.go:93] Provisioning new machine with config: &{Name:auto-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:15:37.948556    3774 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:15:37.957197    3774 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:15:37.971573    3774 start.go:159] libmachine.API.Create for "auto-891000" (driver="qemu2")
	I0213 15:15:37.971600    3774 client.go:168] LocalClient.Create starting
	I0213 15:15:37.971662    3774 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:15:37.971689    3774 main.go:141] libmachine: Decoding PEM data...
	I0213 15:15:37.971699    3774 main.go:141] libmachine: Parsing certificate...
	I0213 15:15:37.971735    3774 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:15:37.971758    3774 main.go:141] libmachine: Decoding PEM data...
	I0213 15:15:37.971766    3774 main.go:141] libmachine: Parsing certificate...
	I0213 15:15:37.972078    3774 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:15:38.093261    3774 main.go:141] libmachine: Creating SSH key...
	I0213 15:15:38.143030    3774 main.go:141] libmachine: Creating Disk image...
	I0213 15:15:38.143038    3774 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:15:38.143225    3774 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/disk.qcow2
	I0213 15:15:38.155560    3774 main.go:141] libmachine: STDOUT: 
	I0213 15:15:38.155581    3774 main.go:141] libmachine: STDERR: 
	I0213 15:15:38.155635    3774 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/disk.qcow2 +20000M
	I0213 15:15:38.166597    3774 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:15:38.166612    3774 main.go:141] libmachine: STDERR: 
	I0213 15:15:38.166625    3774 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/disk.qcow2
	I0213 15:15:38.166630    3774 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:15:38.166654    3774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:ad:1f:57:fe:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/disk.qcow2
	I0213 15:15:38.168302    3774 main.go:141] libmachine: STDOUT: 
	I0213 15:15:38.168318    3774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:15:38.168335    3774 client.go:171] LocalClient.Create took 196.732208ms
	I0213 15:15:40.170729    3774 start.go:128] duration metric: createHost completed in 2.222196375s
	I0213 15:15:40.170817    3774 start.go:83] releasing machines lock for "auto-891000", held for 2.222342416s
	W0213 15:15:40.170867    3774 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:15:40.182001    3774 out.go:177] * Deleting "auto-891000" in qemu2 ...
	W0213 15:15:40.203772    3774 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:15:40.203816    3774 start.go:709] Will try again in 5 seconds ...
	I0213 15:15:45.205877    3774 start.go:365] acquiring machines lock for auto-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:15:45.206324    3774 start.go:369] acquired machines lock for "auto-891000" in 362.542µs
	I0213 15:15:45.206456    3774 start.go:93] Provisioning new machine with config: &{Name:auto-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:15:45.206672    3774 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:15:45.212385    3774 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:15:45.257909    3774 start.go:159] libmachine.API.Create for "auto-891000" (driver="qemu2")
	I0213 15:15:45.257952    3774 client.go:168] LocalClient.Create starting
	I0213 15:15:45.258081    3774 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:15:45.258150    3774 main.go:141] libmachine: Decoding PEM data...
	I0213 15:15:45.258175    3774 main.go:141] libmachine: Parsing certificate...
	I0213 15:15:45.258241    3774 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:15:45.258282    3774 main.go:141] libmachine: Decoding PEM data...
	I0213 15:15:45.258294    3774 main.go:141] libmachine: Parsing certificate...
	I0213 15:15:45.258865    3774 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:15:45.390444    3774 main.go:141] libmachine: Creating SSH key...
	I0213 15:15:45.483494    3774 main.go:141] libmachine: Creating Disk image...
	I0213 15:15:45.483503    3774 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:15:45.483686    3774 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/disk.qcow2
	I0213 15:15:45.496058    3774 main.go:141] libmachine: STDOUT: 
	I0213 15:15:45.496118    3774 main.go:141] libmachine: STDERR: 
	I0213 15:15:45.496185    3774 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/disk.qcow2 +20000M
	I0213 15:15:45.506944    3774 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:15:45.507008    3774 main.go:141] libmachine: STDERR: 
	I0213 15:15:45.507026    3774 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/disk.qcow2
	I0213 15:15:45.507032    3774 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:15:45.507069    3774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:76:77:ef:cd:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/auto-891000/disk.qcow2
	I0213 15:15:45.508862    3774 main.go:141] libmachine: STDOUT: 
	I0213 15:15:45.508902    3774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:15:45.508917    3774 client.go:171] LocalClient.Create took 250.965542ms
	I0213 15:15:47.511217    3774 start.go:128] duration metric: createHost completed in 2.304539375s
	I0213 15:15:47.511308    3774 start.go:83] releasing machines lock for "auto-891000", held for 2.305008958s
	W0213 15:15:47.511573    3774 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:15:47.523241    3774 out.go:177] 
	W0213 15:15:47.527414    3774 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:15:47.527437    3774 out.go:239] * 
	W0213 15:15:47.530291    3774 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:15:47.539347    3774 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.72s)
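The verbose trace above also shows the launch mechanics behind the repeated error: qemu is never executed directly, but through `socket_vmnet_client`, which connects to the `/var/run/socket_vmnet` socket and (per its design) hands the connected descriptor to the child as fd 3, matching qemu's `-netdev socket,id=net0,fd=3` argument. When that connect is refused, qemu never starts at all. The shape of the call, abbreviated from the full invocation in the log (MAC address and long profile paths elided):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	    -device virtio-net-pci,netdev=net0,mac=... \
	    -netdev socket,id=net0,fd=3 \
	    -daemonize .../machines/auto-891000/disk.qcow2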
TestNetworkPlugins/group/flannel/Start (9.81s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.805280209s)

                                                
                                                
-- stdout --
	* [flannel-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-891000 in cluster flannel-891000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:15:49.852677    3884 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:15:49.852815    3884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:15:49.852818    3884 out.go:304] Setting ErrFile to fd 2...
	I0213 15:15:49.852821    3884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:15:49.852970    3884 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:15:49.853980    3884 out.go:298] Setting JSON to false
	I0213 15:15:49.870702    3884 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2571,"bootTime":1707863578,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:15:49.870776    3884 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:15:49.875602    3884 out.go:177] * [flannel-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:15:49.882477    3884 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:15:49.882547    3884 notify.go:220] Checking for updates...
	I0213 15:15:49.886592    3884 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:15:49.889597    3884 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:15:49.892512    3884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:15:49.895588    3884 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:15:49.898581    3884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:15:49.901877    3884 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:15:49.901942    3884 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:15:49.901994    3884 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:15:49.906514    3884 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:15:49.912552    3884 start.go:298] selected driver: qemu2
	I0213 15:15:49.912559    3884 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:15:49.912565    3884 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:15:49.914944    3884 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:15:49.918547    3884 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:15:49.921624    3884 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:15:49.921674    3884 cni.go:84] Creating CNI manager for "flannel"
	I0213 15:15:49.921679    3884 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0213 15:15:49.921691    3884 start_flags.go:321] config:
	{Name:flannel-891000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:15:49.926029    3884 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:15:49.933580    3884 out.go:177] * Starting control plane node flannel-891000 in cluster flannel-891000
	I0213 15:15:49.937555    3884 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:15:49.937570    3884 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:15:49.937579    3884 cache.go:56] Caching tarball of preloaded images
	I0213 15:15:49.937636    3884 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:15:49.937641    3884 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:15:49.937716    3884 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/flannel-891000/config.json ...
	I0213 15:15:49.937727    3884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/flannel-891000/config.json: {Name:mk7cbe8cce15e8c93ae4619a29f447cc4ee86d30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:15:49.937929    3884 start.go:365] acquiring machines lock for flannel-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:15:49.937958    3884 start.go:369] acquired machines lock for "flannel-891000" in 23.25µs
	I0213 15:15:49.937969    3884 start.go:93] Provisioning new machine with config: &{Name:flannel-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:15:49.937998    3884 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:15:49.946584    3884 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:15:49.961867    3884 start.go:159] libmachine.API.Create for "flannel-891000" (driver="qemu2")
	I0213 15:15:49.961898    3884 client.go:168] LocalClient.Create starting
	I0213 15:15:49.961959    3884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:15:49.961987    3884 main.go:141] libmachine: Decoding PEM data...
	I0213 15:15:49.961997    3884 main.go:141] libmachine: Parsing certificate...
	I0213 15:15:49.962038    3884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:15:49.962061    3884 main.go:141] libmachine: Decoding PEM data...
	I0213 15:15:49.962078    3884 main.go:141] libmachine: Parsing certificate...
	I0213 15:15:49.962433    3884 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:15:50.085561    3884 main.go:141] libmachine: Creating SSH key...
	I0213 15:15:50.238183    3884 main.go:141] libmachine: Creating Disk image...
	I0213 15:15:50.238193    3884 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:15:50.238410    3884 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/disk.qcow2
	I0213 15:15:50.251380    3884 main.go:141] libmachine: STDOUT: 
	I0213 15:15:50.251405    3884 main.go:141] libmachine: STDERR: 
	I0213 15:15:50.251464    3884 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/disk.qcow2 +20000M
	I0213 15:15:50.262525    3884 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:15:50.262543    3884 main.go:141] libmachine: STDERR: 
	I0213 15:15:50.262565    3884 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/disk.qcow2
	I0213 15:15:50.262569    3884 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:15:50.262598    3884 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:25:2f:47:9e:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/disk.qcow2
	I0213 15:15:50.264405    3884 main.go:141] libmachine: STDOUT: 
	I0213 15:15:50.264425    3884 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:15:50.264453    3884 client.go:171] LocalClient.Create took 302.554958ms
	I0213 15:15:52.266591    3884 start.go:128] duration metric: createHost completed in 2.328610166s
	I0213 15:15:52.266695    3884 start.go:83] releasing machines lock for "flannel-891000", held for 2.32875375s
	W0213 15:15:52.266768    3884 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:15:52.279413    3884 out.go:177] * Deleting "flannel-891000" in qemu2 ...
	W0213 15:15:52.297835    3884 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:15:52.297858    3884 start.go:709] Will try again in 5 seconds ...
	I0213 15:15:57.300039    3884 start.go:365] acquiring machines lock for flannel-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:15:57.300598    3884 start.go:369] acquired machines lock for "flannel-891000" in 442.25µs
	I0213 15:15:57.300744    3884 start.go:93] Provisioning new machine with config: &{Name:flannel-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:flannel-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:15:57.301071    3884 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:15:57.310650    3884 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:15:57.361334    3884 start.go:159] libmachine.API.Create for "flannel-891000" (driver="qemu2")
	I0213 15:15:57.361379    3884 client.go:168] LocalClient.Create starting
	I0213 15:15:57.361556    3884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:15:57.361628    3884 main.go:141] libmachine: Decoding PEM data...
	I0213 15:15:57.361643    3884 main.go:141] libmachine: Parsing certificate...
	I0213 15:15:57.361707    3884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:15:57.361749    3884 main.go:141] libmachine: Decoding PEM data...
	I0213 15:15:57.361764    3884 main.go:141] libmachine: Parsing certificate...
	I0213 15:15:57.362304    3884 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:15:57.495348    3884 main.go:141] libmachine: Creating SSH key...
	I0213 15:15:57.563119    3884 main.go:141] libmachine: Creating Disk image...
	I0213 15:15:57.563129    3884 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:15:57.563306    3884 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/disk.qcow2
	I0213 15:15:57.575525    3884 main.go:141] libmachine: STDOUT: 
	I0213 15:15:57.575551    3884 main.go:141] libmachine: STDERR: 
	I0213 15:15:57.575602    3884 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/disk.qcow2 +20000M
	I0213 15:15:57.586361    3884 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:15:57.586397    3884 main.go:141] libmachine: STDERR: 
	I0213 15:15:57.586412    3884 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/disk.qcow2
	I0213 15:15:57.586419    3884 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:15:57.586459    3884 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:e2:d0:38:f7:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/flannel-891000/disk.qcow2
	I0213 15:15:57.588230    3884 main.go:141] libmachine: STDOUT: 
	I0213 15:15:57.588245    3884 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:15:57.588261    3884 client.go:171] LocalClient.Create took 226.881667ms
	I0213 15:15:59.590422    3884 start.go:128] duration metric: createHost completed in 2.289345917s
	I0213 15:15:59.590536    3884 start.go:83] releasing machines lock for "flannel-891000", held for 2.289963083s
	W0213 15:15:59.590970    3884 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:15:59.599748    3884 out.go:177] 
	W0213 15:15:59.603608    3884 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:15:59.603635    3884 out.go:239] * 
	* 
	W0213 15:15:59.606225    3884 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:15:59.612598    3884 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.81s)
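Note on the failure mode: both create attempts above die at the same step. socket_vmnet_client cannot reach the control socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives the fd=3 network descriptor that the -netdev socket,id=net0,fd=3 flag expects. A minimal Go sketch (not minikube code; the probe helper is hypothetical, the socket path is taken from the log) that checks whether the socket_vmnet daemon is accepting connections before a start is attempted:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSocketVMnet dials the Unix socket that socket_vmnet_client uses.
	// When the socket_vmnet daemon is not running, the dial fails with the
	// same "connection refused" seen in the test output above.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		// Path taken from the failing command line in the log.
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails, no qemu2+socket_vmnet start on the host can succeed, which is consistent with every Start test in this group failing identically.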

TestNetworkPlugins/group/kindnet/Start (9.88s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E0213 15:16:08.456334    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.878259166s)

-- stdout --
	* [kindnet-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-891000 in cluster kindnet-891000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:16:02.049706    4004 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:16:02.049865    4004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:02.049868    4004 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:02.049871    4004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:02.049986    4004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:16:02.051079    4004 out.go:298] Setting JSON to false
	I0213 15:16:02.067521    4004 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2584,"bootTime":1707863578,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:16:02.067603    4004 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:16:02.071760    4004 out.go:177] * [kindnet-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:16:02.074763    4004 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:16:02.078587    4004 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:16:02.074847    4004 notify.go:220] Checking for updates...
	I0213 15:16:02.085675    4004 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:16:02.086836    4004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:16:02.089713    4004 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:16:02.092731    4004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:16:02.096135    4004 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:16:02.096213    4004 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:16:02.096263    4004 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:16:02.100708    4004 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:16:02.107692    4004 start.go:298] selected driver: qemu2
	I0213 15:16:02.107696    4004 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:16:02.107703    4004 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:16:02.109922    4004 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:16:02.112691    4004 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:16:02.115795    4004 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:16:02.115853    4004 cni.go:84] Creating CNI manager for "kindnet"
	I0213 15:16:02.115859    4004 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0213 15:16:02.115868    4004 start_flags.go:321] config:
	{Name:kindnet-891000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs:}
	I0213 15:16:02.120309    4004 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:16:02.127625    4004 out.go:177] * Starting control plane node kindnet-891000 in cluster kindnet-891000
	I0213 15:16:02.131697    4004 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:16:02.131711    4004 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:16:02.131716    4004 cache.go:56] Caching tarball of preloaded images
	I0213 15:16:02.131766    4004 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:16:02.131772    4004 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:16:02.131830    4004 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/kindnet-891000/config.json ...
	I0213 15:16:02.131841    4004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/kindnet-891000/config.json: {Name:mka86575fe2539cfe225c9779c2a739c49053bc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:16:02.132052    4004 start.go:365] acquiring machines lock for kindnet-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:16:02.132083    4004 start.go:369] acquired machines lock for "kindnet-891000" in 25.334µs
	I0213 15:16:02.132094    4004 start.go:93] Provisioning new machine with config: &{Name:kindnet-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:16:02.132132    4004 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:16:02.140655    4004 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:16:02.157832    4004 start.go:159] libmachine.API.Create for "kindnet-891000" (driver="qemu2")
	I0213 15:16:02.157861    4004 client.go:168] LocalClient.Create starting
	I0213 15:16:02.157931    4004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:16:02.157964    4004 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:02.157979    4004 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:02.158020    4004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:16:02.158042    4004 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:02.158050    4004 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:02.158423    4004 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:16:02.281481    4004 main.go:141] libmachine: Creating SSH key...
	I0213 15:16:02.503255    4004 main.go:141] libmachine: Creating Disk image...
	I0213 15:16:02.503270    4004 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:16:02.503509    4004 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/disk.qcow2
	I0213 15:16:02.516043    4004 main.go:141] libmachine: STDOUT: 
	I0213 15:16:02.516067    4004 main.go:141] libmachine: STDERR: 
	I0213 15:16:02.516128    4004 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/disk.qcow2 +20000M
	I0213 15:16:02.526783    4004 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:16:02.526801    4004 main.go:141] libmachine: STDERR: 
	I0213 15:16:02.526821    4004 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/disk.qcow2
	I0213 15:16:02.526826    4004 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:16:02.526872    4004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:c5:9f:4b:87:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/disk.qcow2
	I0213 15:16:02.528576    4004 main.go:141] libmachine: STDOUT: 
	I0213 15:16:02.528598    4004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:16:02.528621    4004 client.go:171] LocalClient.Create took 370.763291ms
	I0213 15:16:04.530920    4004 start.go:128] duration metric: createHost completed in 2.398799041s
	I0213 15:16:04.531015    4004 start.go:83] releasing machines lock for "kindnet-891000", held for 2.398973667s
	W0213 15:16:04.531083    4004 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:04.543014    4004 out.go:177] * Deleting "kindnet-891000" in qemu2 ...
	W0213 15:16:04.563890    4004 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:04.563918    4004 start.go:709] Will try again in 5 seconds ...
	I0213 15:16:09.564493    4004 start.go:365] acquiring machines lock for kindnet-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:16:09.564587    4004 start.go:369] acquired machines lock for "kindnet-891000" in 74.917µs
	I0213 15:16:09.564612    4004 start.go:93] Provisioning new machine with config: &{Name:kindnet-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:16:09.564658    4004 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:16:09.572855    4004 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:16:09.588678    4004 start.go:159] libmachine.API.Create for "kindnet-891000" (driver="qemu2")
	I0213 15:16:09.588711    4004 client.go:168] LocalClient.Create starting
	I0213 15:16:09.588778    4004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:16:09.588815    4004 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:09.588828    4004 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:09.588868    4004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:16:09.588889    4004 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:09.588897    4004 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:09.589215    4004 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:16:09.713718    4004 main.go:141] libmachine: Creating SSH key...
	I0213 15:16:09.832058    4004 main.go:141] libmachine: Creating Disk image...
	I0213 15:16:09.832067    4004 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:16:09.832291    4004 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/disk.qcow2
	I0213 15:16:09.844984    4004 main.go:141] libmachine: STDOUT: 
	I0213 15:16:09.845008    4004 main.go:141] libmachine: STDERR: 
	I0213 15:16:09.845084    4004 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/disk.qcow2 +20000M
	I0213 15:16:09.856587    4004 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:16:09.856612    4004 main.go:141] libmachine: STDERR: 
	I0213 15:16:09.856627    4004 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/disk.qcow2
	I0213 15:16:09.856631    4004 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:16:09.856666    4004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:3b:58:d3:c5:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kindnet-891000/disk.qcow2
	I0213 15:16:09.858460    4004 main.go:141] libmachine: STDOUT: 
	I0213 15:16:09.858477    4004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:16:09.858489    4004 client.go:171] LocalClient.Create took 269.780167ms
	I0213 15:16:11.860768    4004 start.go:128] duration metric: createHost completed in 2.296093125s
	I0213 15:16:11.860855    4004 start.go:83] releasing machines lock for "kindnet-891000", held for 2.296306583s
	W0213 15:16:11.861241    4004 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:11.867197    4004 out.go:177] 
	W0213 15:16:11.875252    4004 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:16:11.875292    4004 out.go:239] * 
	* 
	W0213 15:16:11.876919    4004 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:16:11.885129    4004 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.88s)
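The control flow is the same two-attempt pattern in every group: createHost fails, the half-created profile is deleted, a second attempt runs after a fixed 5-second pause ("Will try again in 5 seconds ..."), and the second failure is surfaced as GUEST_PROVISION with exit status 80. A rough sketch of that shape, with illustrative names rather than minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the real host creation; in these runs it
	// always fails because socket_vmnet refuses the connection.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithRetry(profile string) error {
		if err := createHost(profile); err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: deleting %q, retrying in 5 seconds\n", profile)
		time.Sleep(5 * time.Second)
		return createHost(profile) // a second failure is fatal
	}

	func main() {
		if err := startWithRetry("kindnet-891000"); err != nil {
			// Mirrors the GUEST_PROVISION exit seen in the log (exit status 80).
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}

Because the retry recreates the host against the same dead socket, the second attempt can never succeed; the ~10s durations reported for these tests are just two create attempts plus the 5-second pause.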

TestNetworkPlugins/group/enable-default-cni/Start (9.94s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.9417305s)

-- stdout --
	* [enable-default-cni-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-891000 in cluster enable-default-cni-891000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:16:14.269096    4121 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:16:14.269231    4121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:14.269234    4121 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:14.269237    4121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:14.269368    4121 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:16:14.270423    4121 out.go:298] Setting JSON to false
	I0213 15:16:14.286802    4121 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2596,"bootTime":1707863578,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:16:14.286887    4121 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:16:14.291243    4121 out.go:177] * [enable-default-cni-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:16:14.295176    4121 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:16:14.299167    4121 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:16:14.295241    4121 notify.go:220] Checking for updates...
	I0213 15:16:14.306191    4121 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:16:14.309187    4121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:16:14.312138    4121 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:16:14.315162    4121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:16:14.318400    4121 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:16:14.318470    4121 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:16:14.318515    4121 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:16:14.323171    4121 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:16:14.329057    4121 start.go:298] selected driver: qemu2
	I0213 15:16:14.329062    4121 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:16:14.329067    4121 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:16:14.331286    4121 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:16:14.334163    4121 out.go:177] * Automatically selected the socket_vmnet network
	E0213 15:16:14.337263    4121 start_flags.go:463] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0213 15:16:14.337274    4121 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:16:14.337332    4121 cni.go:84] Creating CNI manager for "bridge"
	I0213 15:16:14.337337    4121 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:16:14.337343    4121 start_flags.go:321] config:
	{Name:enable-default-cni-891000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:16:14.341934    4121 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:16:14.349207    4121 out.go:177] * Starting control plane node enable-default-cni-891000 in cluster enable-default-cni-891000
	I0213 15:16:14.353197    4121 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:16:14.353212    4121 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:16:14.353226    4121 cache.go:56] Caching tarball of preloaded images
	I0213 15:16:14.353296    4121 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:16:14.353301    4121 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:16:14.353379    4121 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/enable-default-cni-891000/config.json ...
	I0213 15:16:14.353396    4121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/enable-default-cni-891000/config.json: {Name:mk5f087fd3a1d894eaad0d7b0ac303bc893df3bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:16:14.353607    4121 start.go:365] acquiring machines lock for enable-default-cni-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:16:14.353641    4121 start.go:369] acquired machines lock for "enable-default-cni-891000" in 25.292µs
	I0213 15:16:14.353653    4121 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:16:14.353693    4121 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:16:14.362158    4121 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:16:14.379111    4121 start.go:159] libmachine.API.Create for "enable-default-cni-891000" (driver="qemu2")
	I0213 15:16:14.379144    4121 client.go:168] LocalClient.Create starting
	I0213 15:16:14.379219    4121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:16:14.379251    4121 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:14.379259    4121 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:14.379304    4121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:16:14.379327    4121 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:14.379335    4121 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:14.379708    4121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:16:14.501023    4121 main.go:141] libmachine: Creating SSH key...
	I0213 15:16:14.675254    4121 main.go:141] libmachine: Creating Disk image...
	I0213 15:16:14.675264    4121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:16:14.675497    4121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/disk.qcow2
	I0213 15:16:14.688673    4121 main.go:141] libmachine: STDOUT: 
	I0213 15:16:14.688693    4121 main.go:141] libmachine: STDERR: 
	I0213 15:16:14.688764    4121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/disk.qcow2 +20000M
	I0213 15:16:14.699714    4121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:16:14.699730    4121 main.go:141] libmachine: STDERR: 
	I0213 15:16:14.699756    4121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/disk.qcow2
	I0213 15:16:14.699762    4121 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:16:14.699793    4121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:97:22:58:1c:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/disk.qcow2
	I0213 15:16:14.701499    4121 main.go:141] libmachine: STDOUT: 
	I0213 15:16:14.701514    4121 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:16:14.701533    4121 client.go:171] LocalClient.Create took 322.390875ms
	I0213 15:16:16.702949    4121 start.go:128] duration metric: createHost completed in 2.349284292s
	I0213 15:16:16.703009    4121 start.go:83] releasing machines lock for "enable-default-cni-891000", held for 2.349410166s
	W0213 15:16:16.703047    4121 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:16.714477    4121 out.go:177] * Deleting "enable-default-cni-891000" in qemu2 ...
	W0213 15:16:16.731490    4121 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:16.731512    4121 start.go:709] Will try again in 5 seconds ...
	I0213 15:16:21.733536    4121 start.go:365] acquiring machines lock for enable-default-cni-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:16:21.733808    4121 start.go:369] acquired machines lock for "enable-default-cni-891000" in 227.708µs
	I0213 15:16:21.733864    4121 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:16:21.733983    4121 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:16:21.743255    4121 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:16:21.779432    4121 start.go:159] libmachine.API.Create for "enable-default-cni-891000" (driver="qemu2")
	I0213 15:16:21.779481    4121 client.go:168] LocalClient.Create starting
	I0213 15:16:21.779595    4121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:16:21.779656    4121 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:21.779676    4121 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:21.779734    4121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:16:21.779793    4121 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:21.779804    4121 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:21.780302    4121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:16:21.911026    4121 main.go:141] libmachine: Creating SSH key...
	I0213 15:16:22.112376    4121 main.go:141] libmachine: Creating Disk image...
	I0213 15:16:22.112386    4121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:16:22.112653    4121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/disk.qcow2
	I0213 15:16:22.125589    4121 main.go:141] libmachine: STDOUT: 
	I0213 15:16:22.125622    4121 main.go:141] libmachine: STDERR: 
	I0213 15:16:22.125676    4121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/disk.qcow2 +20000M
	I0213 15:16:22.136355    4121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:16:22.136373    4121 main.go:141] libmachine: STDERR: 
	I0213 15:16:22.136387    4121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/disk.qcow2
	I0213 15:16:22.136392    4121 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:16:22.136432    4121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:7a:d0:5b:80:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/enable-default-cni-891000/disk.qcow2
	I0213 15:16:22.138117    4121 main.go:141] libmachine: STDOUT: 
	I0213 15:16:22.138134    4121 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:16:22.138146    4121 client.go:171] LocalClient.Create took 358.666167ms
	I0213 15:16:24.140415    4121 start.go:128] duration metric: createHost completed in 2.406445917s
	I0213 15:16:24.140491    4121 start.go:83] releasing machines lock for "enable-default-cni-891000", held for 2.406713834s
	W0213 15:16:24.140814    4121 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:24.150593    4121 out.go:177] 
	W0213 15:16:24.154652    4121 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:16:24.154727    4121 out.go:239] * 
	* 
	W0213 15:16:24.157531    4121 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:16:24.166499    4121 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.94s)
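
All of the network-plugin Start failures in this group abort at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu2 VM is never launched and minikube exits with GUEST_PROVISION. A minimal host-side triage sketch, assuming the /opt/socket_vmnet layout shown in the logs; the restart command is an assumption about a Homebrew-managed install, not something taken from this report:

	ls -l /var/run/socket_vmnet    # the daemon's listening socket should exist
	pgrep -fl socket_vmnet         # a socket_vmnet daemon process should be running
	# If the daemon is down, restart it (e.g. `sudo brew services restart socket_vmnet`
	# on a Homebrew install), then re-run the failing start.

The suggested `minikube delete -p <profile>` is unlikely to help on its own here, since the refused connection is host-side daemon state rather than a stale profile.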

TestNetworkPlugins/group/bridge/Start (9.68s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.678305084s)

-- stdout --
	* [bridge-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-891000 in cluster bridge-891000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:16:26.499706    4231 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:16:26.499835    4231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:26.499839    4231 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:26.499841    4231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:26.499969    4231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:16:26.501056    4231 out.go:298] Setting JSON to false
	I0213 15:16:26.517406    4231 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2608,"bootTime":1707863578,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:16:26.517464    4231 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:16:26.521762    4231 out.go:177] * [bridge-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:16:26.528584    4231 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:16:26.528615    4231 notify.go:220] Checking for updates...
	I0213 15:16:26.532692    4231 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:16:26.538638    4231 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:16:26.542697    4231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:16:26.545724    4231 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:16:26.548670    4231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:16:26.551988    4231 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:16:26.552052    4231 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:16:26.552105    4231 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:16:26.556659    4231 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:16:26.563647    4231 start.go:298] selected driver: qemu2
	I0213 15:16:26.563653    4231 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:16:26.563659    4231 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:16:26.565986    4231 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:16:26.568702    4231 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:16:26.570159    4231 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:16:26.570201    4231 cni.go:84] Creating CNI manager for "bridge"
	I0213 15:16:26.570205    4231 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:16:26.570210    4231 start_flags.go:321] config:
	{Name:bridge-891000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:16:26.574584    4231 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:16:26.581641    4231 out.go:177] * Starting control plane node bridge-891000 in cluster bridge-891000
	I0213 15:16:26.585688    4231 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:16:26.585708    4231 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:16:26.585718    4231 cache.go:56] Caching tarball of preloaded images
	I0213 15:16:26.585778    4231 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:16:26.585783    4231 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:16:26.585862    4231 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/bridge-891000/config.json ...
	I0213 15:16:26.585873    4231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/bridge-891000/config.json: {Name:mkf72878c686c1b809b5842d02bc552be7dbade8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:16:26.586091    4231 start.go:365] acquiring machines lock for bridge-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:16:26.586118    4231 start.go:369] acquired machines lock for "bridge-891000" in 22.375µs
	I0213 15:16:26.586130    4231 start.go:93] Provisioning new machine with config: &{Name:bridge-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:16:26.586164    4231 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:16:26.594558    4231 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:16:26.608684    4231 start.go:159] libmachine.API.Create for "bridge-891000" (driver="qemu2")
	I0213 15:16:26.608712    4231 client.go:168] LocalClient.Create starting
	I0213 15:16:26.608781    4231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:16:26.608820    4231 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:26.608830    4231 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:26.608870    4231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:16:26.608892    4231 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:26.608900    4231 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:26.609266    4231 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:16:26.732322    4231 main.go:141] libmachine: Creating SSH key...
	I0213 15:16:26.782634    4231 main.go:141] libmachine: Creating Disk image...
	I0213 15:16:26.782643    4231 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:16:26.782854    4231 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/disk.qcow2
	I0213 15:16:26.795216    4231 main.go:141] libmachine: STDOUT: 
	I0213 15:16:26.795235    4231 main.go:141] libmachine: STDERR: 
	I0213 15:16:26.795292    4231 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/disk.qcow2 +20000M
	I0213 15:16:26.806075    4231 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:16:26.806086    4231 main.go:141] libmachine: STDERR: 
	I0213 15:16:26.806110    4231 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/disk.qcow2
	I0213 15:16:26.806113    4231 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:16:26.806154    4231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:90:20:03:39:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/disk.qcow2
	I0213 15:16:26.807880    4231 main.go:141] libmachine: STDOUT: 
	I0213 15:16:26.807895    4231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:16:26.807923    4231 client.go:171] LocalClient.Create took 199.208916ms
	I0213 15:16:28.810143    4231 start.go:128] duration metric: createHost completed in 2.223985625s
	I0213 15:16:28.810225    4231 start.go:83] releasing machines lock for "bridge-891000", held for 2.224147917s
	W0213 15:16:28.810269    4231 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:28.819973    4231 out.go:177] * Deleting "bridge-891000" in qemu2 ...
	W0213 15:16:28.840270    4231 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:28.840304    4231 start.go:709] Will try again in 5 seconds ...
	I0213 15:16:33.842471    4231 start.go:365] acquiring machines lock for bridge-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:16:33.842991    4231 start.go:369] acquired machines lock for "bridge-891000" in 389.958µs
	I0213 15:16:33.843145    4231 start.go:93] Provisioning new machine with config: &{Name:bridge-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:16:33.843449    4231 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:16:33.852038    4231 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:16:33.899507    4231 start.go:159] libmachine.API.Create for "bridge-891000" (driver="qemu2")
	I0213 15:16:33.899555    4231 client.go:168] LocalClient.Create starting
	I0213 15:16:33.899659    4231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:16:33.899773    4231 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:33.899795    4231 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:33.899858    4231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:16:33.899899    4231 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:33.899923    4231 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:33.900459    4231 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:16:34.033424    4231 main.go:141] libmachine: Creating SSH key...
	I0213 15:16:34.086601    4231 main.go:141] libmachine: Creating Disk image...
	I0213 15:16:34.086607    4231 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:16:34.086828    4231 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/disk.qcow2
	I0213 15:16:34.099054    4231 main.go:141] libmachine: STDOUT: 
	I0213 15:16:34.099076    4231 main.go:141] libmachine: STDERR: 
	I0213 15:16:34.099131    4231 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/disk.qcow2 +20000M
	I0213 15:16:34.109873    4231 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:16:34.109890    4231 main.go:141] libmachine: STDERR: 
	I0213 15:16:34.109909    4231 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/disk.qcow2
	I0213 15:16:34.109916    4231 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:16:34.109954    4231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:59:cb:e0:11:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/bridge-891000/disk.qcow2
	I0213 15:16:34.111737    4231 main.go:141] libmachine: STDOUT: 
	I0213 15:16:34.111755    4231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:16:34.111769    4231 client.go:171] LocalClient.Create took 212.213209ms
	I0213 15:16:36.113807    4231 start.go:128] duration metric: createHost completed in 2.27039075s
	I0213 15:16:36.113833    4231 start.go:83] releasing machines lock for "bridge-891000", held for 2.2708705s
	W0213 15:16:36.113995    4231 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:36.123421    4231 out.go:177] 
	W0213 15:16:36.127410    4231 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:16:36.127440    4231 out.go:239] * 
	* 
	W0213 15:16:36.128787    4231 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:16:36.138183    4231 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.68s)

TestNetworkPlugins/group/kubenet/Start (9.69s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.690145167s)

-- stdout --
	* [kubenet-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-891000 in cluster kubenet-891000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:16:38.433548    4341 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:16:38.433671    4341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:38.433674    4341 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:38.433677    4341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:38.433800    4341 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:16:38.434833    4341 out.go:298] Setting JSON to false
	I0213 15:16:38.451456    4341 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2620,"bootTime":1707863578,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:16:38.451543    4341 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:16:38.457389    4341 out.go:177] * [kubenet-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:16:38.464435    4341 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:16:38.468448    4341 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:16:38.464469    4341 notify.go:220] Checking for updates...
	I0213 15:16:38.474504    4341 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:16:38.477499    4341 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:16:38.480442    4341 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:16:38.483488    4341 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:16:38.486765    4341 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:16:38.486830    4341 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:16:38.486879    4341 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:16:38.494406    4341 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:16:38.501443    4341 start.go:298] selected driver: qemu2
	I0213 15:16:38.501449    4341 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:16:38.501456    4341 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:16:38.503732    4341 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:16:38.507476    4341 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:16:38.510512    4341 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:16:38.510554    4341 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0213 15:16:38.510559    4341 start_flags.go:321] config:
	{Name:kubenet-891000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:16:38.514942    4341 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:16:38.522484    4341 out.go:177] * Starting control plane node kubenet-891000 in cluster kubenet-891000
	I0213 15:16:38.526369    4341 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:16:38.526385    4341 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:16:38.526393    4341 cache.go:56] Caching tarball of preloaded images
	I0213 15:16:38.526446    4341 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:16:38.526451    4341 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:16:38.526512    4341 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/kubenet-891000/config.json ...
	I0213 15:16:38.526522    4341 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/kubenet-891000/config.json: {Name:mkcc671a5d39735ae8526152d5b0b546a602fae7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:16:38.526725    4341 start.go:365] acquiring machines lock for kubenet-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:16:38.526754    4341 start.go:369] acquired machines lock for "kubenet-891000" in 24.375µs
	I0213 15:16:38.526766    4341 start.go:93] Provisioning new machine with config: &{Name:kubenet-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:16:38.526792    4341 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:16:38.535419    4341 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:16:38.549627    4341 start.go:159] libmachine.API.Create for "kubenet-891000" (driver="qemu2")
	I0213 15:16:38.549655    4341 client.go:168] LocalClient.Create starting
	I0213 15:16:38.549713    4341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:16:38.549741    4341 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:38.549756    4341 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:38.549799    4341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:16:38.549821    4341 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:38.549829    4341 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:38.550183    4341 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:16:38.672666    4341 main.go:141] libmachine: Creating SSH key...
	I0213 15:16:38.732484    4341 main.go:141] libmachine: Creating Disk image...
	I0213 15:16:38.732491    4341 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:16:38.732685    4341 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/disk.qcow2
	I0213 15:16:38.744862    4341 main.go:141] libmachine: STDOUT: 
	I0213 15:16:38.744884    4341 main.go:141] libmachine: STDERR: 
	I0213 15:16:38.744933    4341 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/disk.qcow2 +20000M
	I0213 15:16:38.755905    4341 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:16:38.755921    4341 main.go:141] libmachine: STDERR: 
	I0213 15:16:38.755937    4341 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/disk.qcow2
	I0213 15:16:38.755943    4341 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:16:38.756007    4341 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:0d:99:65:73:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/disk.qcow2
	I0213 15:16:38.757726    4341 main.go:141] libmachine: STDOUT: 
	I0213 15:16:38.757757    4341 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:16:38.757776    4341 client.go:171] LocalClient.Create took 208.118958ms
	I0213 15:16:40.759905    4341 start.go:128] duration metric: createHost completed in 2.233149833s
	I0213 15:16:40.759939    4341 start.go:83] releasing machines lock for "kubenet-891000", held for 2.233225125s
	W0213 15:16:40.759985    4341 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:40.770974    4341 out.go:177] * Deleting "kubenet-891000" in qemu2 ...
	W0213 15:16:40.786267    4341 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:40.786289    4341 start.go:709] Will try again in 5 seconds ...
	I0213 15:16:45.788400    4341 start.go:365] acquiring machines lock for kubenet-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:16:45.788915    4341 start.go:369] acquired machines lock for "kubenet-891000" in 407.292µs
	I0213 15:16:45.789056    4341 start.go:93] Provisioning new machine with config: &{Name:kubenet-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:16:45.789392    4341 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:16:45.794999    4341 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:16:45.842501    4341 start.go:159] libmachine.API.Create for "kubenet-891000" (driver="qemu2")
	I0213 15:16:45.842556    4341 client.go:168] LocalClient.Create starting
	I0213 15:16:45.842721    4341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:16:45.842804    4341 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:45.842824    4341 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:45.842890    4341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:16:45.842942    4341 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:45.842958    4341 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:45.843495    4341 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:16:45.976328    4341 main.go:141] libmachine: Creating SSH key...
	I0213 15:16:46.024436    4341 main.go:141] libmachine: Creating Disk image...
	I0213 15:16:46.024441    4341 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:16:46.024650    4341 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/disk.qcow2
	I0213 15:16:46.037347    4341 main.go:141] libmachine: STDOUT: 
	I0213 15:16:46.037369    4341 main.go:141] libmachine: STDERR: 
	I0213 15:16:46.037432    4341 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/disk.qcow2 +20000M
	I0213 15:16:46.048514    4341 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:16:46.048534    4341 main.go:141] libmachine: STDERR: 
	I0213 15:16:46.048546    4341 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/disk.qcow2
	I0213 15:16:46.048551    4341 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:16:46.048593    4341 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:f6:bf:d6:6a:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/kubenet-891000/disk.qcow2
	I0213 15:16:46.050342    4341 main.go:141] libmachine: STDOUT: 
	I0213 15:16:46.050362    4341 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:16:46.050375    4341 client.go:171] LocalClient.Create took 207.817792ms
	I0213 15:16:48.052583    4341 start.go:128] duration metric: createHost completed in 2.263206s
	I0213 15:16:48.052657    4341 start.go:83] releasing machines lock for "kubenet-891000", held for 2.263769125s
	W0213 15:16:48.052975    4341 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:48.063534    4341 out.go:177] 
	W0213 15:16:48.067663    4341 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:16:48.067712    4341 out.go:239] * 
	* 
	W0213 15:16:48.069531    4341 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:16:48.079410    4341 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.69s)
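
Triage note: every stderr in this group shows the same root cause. socket_vmnet_client cannot dial the unix socket at /var/run/socket_vmnet, so QEMU is never handed its network fd and createHost fails on both attempts. The failure can be reproduced outside the suite with a minimal Go probe (a sketch, not part of net_test.go; only the socket path is taken from the log above):

	// probe.go: dial the socket_vmnet unix socket the way socket_vmnet_client does.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" matches this report: the socket file exists but
			// no socket_vmnet daemon is accepting; a missing file would surface as
			// "no such file or directory" instead.
			fmt.Fprintln(os.Stderr, "dial failed:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails the same way, restarting the socket_vmnet daemon on the CI host is the likely fix; the per-test retries below cannot succeed until it does.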

TestNetworkPlugins/group/custom-flannel/Start (9.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.800975375s)

-- stdout --
	* [custom-flannel-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-891000 in cluster custom-flannel-891000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:16:50.384296    4453 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:16:50.384415    4453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:50.384417    4453 out.go:304] Setting ErrFile to fd 2...
	I0213 15:16:50.384420    4453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:16:50.384553    4453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:16:50.385620    4453 out.go:298] Setting JSON to false
	I0213 15:16:50.402175    4453 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2632,"bootTime":1707863578,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:16:50.402259    4453 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:16:50.408079    4453 out.go:177] * [custom-flannel-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:16:50.415096    4453 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:16:50.420100    4453 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:16:50.415166    4453 notify.go:220] Checking for updates...
	I0213 15:16:50.426054    4453 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:16:50.429049    4453 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:16:50.432094    4453 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:16:50.433439    4453 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:16:50.436447    4453 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:16:50.436511    4453 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:16:50.436559    4453 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:16:50.441054    4453 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:16:50.446028    4453 start.go:298] selected driver: qemu2
	I0213 15:16:50.446033    4453 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:16:50.446040    4453 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:16:50.448374    4453 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:16:50.451041    4453 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:16:50.454113    4453 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:16:50.454142    4453 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0213 15:16:50.454165    4453 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0213 15:16:50.454173    4453 start_flags.go:321] config:
	{Name:custom-flannel-891000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:16:50.458481    4453 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:16:50.466065    4453 out.go:177] * Starting control plane node custom-flannel-891000 in cluster custom-flannel-891000
	I0213 15:16:50.470051    4453 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:16:50.470069    4453 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:16:50.470078    4453 cache.go:56] Caching tarball of preloaded images
	I0213 15:16:50.470133    4453 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:16:50.470139    4453 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:16:50.470241    4453 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/custom-flannel-891000/config.json ...
	I0213 15:16:50.470251    4453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/custom-flannel-891000/config.json: {Name:mk5cbc693766d3ee1471283e0b7c53347ec1c5d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:16:50.470457    4453 start.go:365] acquiring machines lock for custom-flannel-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:16:50.470485    4453 start.go:369] acquired machines lock for "custom-flannel-891000" in 21.875µs
	I0213 15:16:50.470495    4453 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:16:50.470520    4453 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:16:50.477972    4453 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:16:50.492431    4453 start.go:159] libmachine.API.Create for "custom-flannel-891000" (driver="qemu2")
	I0213 15:16:50.492460    4453 client.go:168] LocalClient.Create starting
	I0213 15:16:50.492524    4453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:16:50.492552    4453 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:50.492567    4453 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:50.492607    4453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:16:50.492628    4453 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:50.492633    4453 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:50.493046    4453 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:16:50.618171    4453 main.go:141] libmachine: Creating SSH key...
	I0213 15:16:50.689516    4453 main.go:141] libmachine: Creating Disk image...
	I0213 15:16:50.689525    4453 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:16:50.689724    4453 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/disk.qcow2
	I0213 15:16:50.702345    4453 main.go:141] libmachine: STDOUT: 
	I0213 15:16:50.702367    4453 main.go:141] libmachine: STDERR: 
	I0213 15:16:50.702441    4453 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/disk.qcow2 +20000M
	I0213 15:16:50.713498    4453 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:16:50.713517    4453 main.go:141] libmachine: STDERR: 
	I0213 15:16:50.713537    4453 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/disk.qcow2
	I0213 15:16:50.713546    4453 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:16:50.713579    4453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:14:bc:66:53:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/disk.qcow2
	I0213 15:16:50.715358    4453 main.go:141] libmachine: STDOUT: 
	I0213 15:16:50.715376    4453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:16:50.715397    4453 client.go:171] LocalClient.Create took 222.9375ms
	I0213 15:16:52.717648    4453 start.go:128] duration metric: createHost completed in 2.247125791s
	I0213 15:16:52.717732    4453 start.go:83] releasing machines lock for "custom-flannel-891000", held for 2.247287167s
	W0213 15:16:52.717777    4453 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:52.728645    4453 out.go:177] * Deleting "custom-flannel-891000" in qemu2 ...
	W0213 15:16:52.747634    4453 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:16:52.747687    4453 start.go:709] Will try again in 5 seconds ...
	I0213 15:16:57.749768    4453 start.go:365] acquiring machines lock for custom-flannel-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:16:57.750263    4453 start.go:369] acquired machines lock for "custom-flannel-891000" in 407.792µs
	I0213 15:16:57.750416    4453 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:16:57.750740    4453 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:16:57.759177    4453 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:16:57.808772    4453 start.go:159] libmachine.API.Create for "custom-flannel-891000" (driver="qemu2")
	I0213 15:16:57.808819    4453 client.go:168] LocalClient.Create starting
	I0213 15:16:57.808996    4453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:16:57.809064    4453 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:57.809089    4453 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:57.809162    4453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:16:57.809210    4453 main.go:141] libmachine: Decoding PEM data...
	I0213 15:16:57.809224    4453 main.go:141] libmachine: Parsing certificate...
	I0213 15:16:57.809760    4453 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:16:57.943799    4453 main.go:141] libmachine: Creating SSH key...
	I0213 15:16:58.087145    4453 main.go:141] libmachine: Creating Disk image...
	I0213 15:16:58.087157    4453 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:16:58.087359    4453 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/disk.qcow2
	I0213 15:16:58.099846    4453 main.go:141] libmachine: STDOUT: 
	I0213 15:16:58.099877    4453 main.go:141] libmachine: STDERR: 
	I0213 15:16:58.099945    4453 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/disk.qcow2 +20000M
	I0213 15:16:58.110825    4453 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:16:58.110845    4453 main.go:141] libmachine: STDERR: 
	I0213 15:16:58.110863    4453 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/disk.qcow2
	I0213 15:16:58.110868    4453 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:16:58.110909    4453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:74:37:c1:6a:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/custom-flannel-891000/disk.qcow2
	I0213 15:16:58.112679    4453 main.go:141] libmachine: STDOUT: 
	I0213 15:16:58.112695    4453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:16:58.112710    4453 client.go:171] LocalClient.Create took 303.890167ms
	I0213 15:17:00.115021    4453 start.go:128] duration metric: createHost completed in 2.364233875s
	I0213 15:17:00.115177    4453 start.go:83] releasing machines lock for "custom-flannel-891000", held for 2.364932625s
	W0213 15:17:00.115640    4453 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:00.125299    4453 out.go:177] 
	W0213 15:17:00.129355    4453 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:17:00.129383    4453 out.go:239] * 
	* 
	W0213 15:17:00.132470    4453 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:17:00.143424    4453 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.80s)
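
Triage note: the stderr above also shows minikube's single-retry flow: StartHost fails, the half-created profile is deleted, and after a fixed 5-second wait one more create is attempted before the run exits with GUEST_PROVISION (exit status 80). A sketch of that control flow (illustrative names only, not minikube's actual API):

	// retry.go: illustrative sketch of the behavior logged above; createHost is a
	// stand-in for libmachine.API.Create and always fails the way this report does.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status net_test.go reports
			}
		}
	}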

TestNetworkPlugins/group/calico/Start (9.7s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.700668167s)

-- stdout --
	* [calico-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-891000 in cluster calico-891000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:17:02.632460    4577 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:17:02.632579    4577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:02.632582    4577 out.go:304] Setting ErrFile to fd 2...
	I0213 15:17:02.632585    4577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:02.632711    4577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:17:02.633778    4577 out.go:298] Setting JSON to false
	I0213 15:17:02.650036    4577 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2644,"bootTime":1707863578,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:17:02.650135    4577 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:17:02.655448    4577 out.go:177] * [calico-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:17:02.663442    4577 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:17:02.666312    4577 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:17:02.663537    4577 notify.go:220] Checking for updates...
	I0213 15:17:02.672403    4577 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:17:02.673850    4577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:17:02.677409    4577 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:17:02.680434    4577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:17:02.683713    4577 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:17:02.683787    4577 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:17:02.683837    4577 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:17:02.688411    4577 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:17:02.695391    4577 start.go:298] selected driver: qemu2
	I0213 15:17:02.695396    4577 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:17:02.695401    4577 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:17:02.697599    4577 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:17:02.701369    4577 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:17:02.704462    4577 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:17:02.704493    4577 cni.go:84] Creating CNI manager for "calico"
	I0213 15:17:02.704498    4577 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0213 15:17:02.704504    4577 start_flags.go:321] config:
	{Name:calico-891000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:17:02.708688    4577 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:02.715387    4577 out.go:177] * Starting control plane node calico-891000 in cluster calico-891000
	I0213 15:17:02.719349    4577 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:17:02.719365    4577 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:17:02.719372    4577 cache.go:56] Caching tarball of preloaded images
	I0213 15:17:02.719424    4577 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:17:02.719429    4577 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:17:02.719491    4577 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/calico-891000/config.json ...
	I0213 15:17:02.719501    4577 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/calico-891000/config.json: {Name:mk4da2c86c48d15688d0ebbf8045e746ce45ab4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:17:02.719701    4577 start.go:365] acquiring machines lock for calico-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:02.719729    4577 start.go:369] acquired machines lock for "calico-891000" in 22.792µs
	I0213 15:17:02.719739    4577 start.go:93] Provisioning new machine with config: &{Name:calico-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:17:02.719771    4577 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:17:02.727452    4577 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:17:02.743012    4577 start.go:159] libmachine.API.Create for "calico-891000" (driver="qemu2")
	I0213 15:17:02.743044    4577 client.go:168] LocalClient.Create starting
	I0213 15:17:02.743105    4577 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:17:02.743139    4577 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:02.743149    4577 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:02.743191    4577 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:17:02.743215    4577 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:02.743225    4577 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:02.743573    4577 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:17:02.865806    4577 main.go:141] libmachine: Creating SSH key...
	I0213 15:17:02.935761    4577 main.go:141] libmachine: Creating Disk image...
	I0213 15:17:02.935767    4577 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:17:02.935950    4577 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/disk.qcow2
	I0213 15:17:02.948114    4577 main.go:141] libmachine: STDOUT: 
	I0213 15:17:02.948143    4577 main.go:141] libmachine: STDERR: 
	I0213 15:17:02.948199    4577 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/disk.qcow2 +20000M
	I0213 15:17:02.959689    4577 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:17:02.959752    4577 main.go:141] libmachine: STDERR: 
	I0213 15:17:02.959772    4577 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/disk.qcow2
	I0213 15:17:02.959776    4577 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:17:02.959808    4577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:ac:ef:f4:aa:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/disk.qcow2
	I0213 15:17:02.961645    4577 main.go:141] libmachine: STDOUT: 
	I0213 15:17:02.961667    4577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:02.961688    4577 client.go:171] LocalClient.Create took 218.642708ms
	I0213 15:17:04.963926    4577 start.go:128] duration metric: createHost completed in 2.244172s
	I0213 15:17:04.964012    4577 start.go:83] releasing machines lock for "calico-891000", held for 2.244323375s
	W0213 15:17:04.964056    4577 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:04.975684    4577 out.go:177] * Deleting "calico-891000" in qemu2 ...
	W0213 15:17:04.996056    4577 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:04.996088    4577 start.go:709] Will try again in 5 seconds ...
	I0213 15:17:09.998191    4577 start.go:365] acquiring machines lock for calico-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:09.998673    4577 start.go:369] acquired machines lock for "calico-891000" in 383.208µs
	I0213 15:17:09.998834    4577 start.go:93] Provisioning new machine with config: &{Name:calico-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:17:09.999145    4577 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:17:10.003857    4577 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:17:10.052042    4577 start.go:159] libmachine.API.Create for "calico-891000" (driver="qemu2")
	I0213 15:17:10.052097    4577 client.go:168] LocalClient.Create starting
	I0213 15:17:10.052207    4577 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:17:10.052272    4577 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:10.052288    4577 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:10.052350    4577 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:17:10.052398    4577 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:10.052414    4577 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:10.053057    4577 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:17:10.188063    4577 main.go:141] libmachine: Creating SSH key...
	I0213 15:17:10.234943    4577 main.go:141] libmachine: Creating Disk image...
	I0213 15:17:10.234949    4577 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:17:10.235149    4577 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/disk.qcow2
	I0213 15:17:10.248068    4577 main.go:141] libmachine: STDOUT: 
	I0213 15:17:10.248089    4577 main.go:141] libmachine: STDERR: 
	I0213 15:17:10.248153    4577 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/disk.qcow2 +20000M
	I0213 15:17:10.259387    4577 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:17:10.259405    4577 main.go:141] libmachine: STDERR: 
	I0213 15:17:10.259418    4577 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/disk.qcow2
	I0213 15:17:10.259424    4577 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:17:10.259481    4577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:d0:e3:45:54:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/calico-891000/disk.qcow2
	I0213 15:17:10.261296    4577 main.go:141] libmachine: STDOUT: 
	I0213 15:17:10.261312    4577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:10.261324    4577 client.go:171] LocalClient.Create took 209.225ms
	I0213 15:17:12.262171    4577 start.go:128] duration metric: createHost completed in 2.263054s
	I0213 15:17:12.262216    4577 start.go:83] releasing machines lock for "calico-891000", held for 2.263570875s
	W0213 15:17:12.262428    4577 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:12.271893    4577 out.go:177] 
	W0213 15:17:12.274783    4577 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:17:12.274796    4577 out.go:239] * 
	* 
	W0213 15:17:12.276144    4577 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:17:12.291811    4577 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.70s)
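
Triage note: net_test.go only observes the final exit code, which is why each failure above reports "failed start: exit status 80". The failing invocation can be replayed by hand with a small wrapper (a sketch; the binary path and flags are copied from the (dbg) Run line above):

	// rerun.go: replay the failing start command and read its exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "calico-891000",
			"--memory=3072", "--alsologtostderr", "--wait=true", "--wait-timeout=15m",
			"--cni=calico", "--driver=qemu2")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit status:", exitErr.ExitCode()) // 80 throughout this report
		}
	}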

TestNetworkPlugins/group/false/Start (9.8s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-891000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.793989708s)

-- stdout --
	* [false-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-891000 in cluster false-891000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:17:14.773633    4702 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:17:14.773789    4702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:14.773792    4702 out.go:304] Setting ErrFile to fd 2...
	I0213 15:17:14.773795    4702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:14.773917    4702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:17:14.774985    4702 out.go:298] Setting JSON to false
	I0213 15:17:14.791298    4702 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2656,"bootTime":1707863578,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:17:14.791448    4702 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:17:14.796712    4702 out.go:177] * [false-891000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:17:14.804701    4702 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:17:14.804761    4702 notify.go:220] Checking for updates...
	I0213 15:17:14.808667    4702 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:17:14.811673    4702 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:17:14.814628    4702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:17:14.817693    4702 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:17:14.820695    4702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:17:14.824015    4702 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:17:14.824104    4702 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:17:14.824147    4702 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:17:14.828630    4702 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:17:14.835621    4702 start.go:298] selected driver: qemu2
	I0213 15:17:14.835626    4702 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:17:14.835630    4702 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:17:14.837886    4702 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:17:14.840620    4702 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:17:14.843756    4702 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:17:14.843783    4702 cni.go:84] Creating CNI manager for "false"
	I0213 15:17:14.843805    4702 start_flags.go:321] config:
	{Name:false-891000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:17:14.848341    4702 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:14.855635    4702 out.go:177] * Starting control plane node false-891000 in cluster false-891000
	I0213 15:17:14.859666    4702 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:17:14.859681    4702 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:17:14.859691    4702 cache.go:56] Caching tarball of preloaded images
	I0213 15:17:14.859751    4702 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:17:14.859757    4702 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:17:14.859847    4702 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/false-891000/config.json ...
	I0213 15:17:14.859859    4702 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/false-891000/config.json: {Name:mkc8aa4afd01b5339a8c92e042bd21a707f064e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:17:14.860074    4702 start.go:365] acquiring machines lock for false-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:14.860104    4702 start.go:369] acquired machines lock for "false-891000" in 25.125µs
	I0213 15:17:14.860116    4702 start.go:93] Provisioning new machine with config: &{Name:false-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:17:14.860148    4702 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:17:14.868698    4702 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:17:14.885557    4702 start.go:159] libmachine.API.Create for "false-891000" (driver="qemu2")
	I0213 15:17:14.885595    4702 client.go:168] LocalClient.Create starting
	I0213 15:17:14.885663    4702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:17:14.885692    4702 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:14.885703    4702 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:14.885743    4702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:17:14.885765    4702 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:14.885778    4702 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:14.886148    4702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:17:15.012976    4702 main.go:141] libmachine: Creating SSH key...
	I0213 15:17:15.191233    4702 main.go:141] libmachine: Creating Disk image...
	I0213 15:17:15.191243    4702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:17:15.191441    4702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/disk.qcow2
	I0213 15:17:15.203963    4702 main.go:141] libmachine: STDOUT: 
	I0213 15:17:15.203982    4702 main.go:141] libmachine: STDERR: 
	I0213 15:17:15.204034    4702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/disk.qcow2 +20000M
	I0213 15:17:15.215378    4702 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:17:15.215407    4702 main.go:141] libmachine: STDERR: 
	I0213 15:17:15.215426    4702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/disk.qcow2
	I0213 15:17:15.215432    4702 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:17:15.215470    4702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:eb:65:3d:e8:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/disk.qcow2
	I0213 15:17:15.217273    4702 main.go:141] libmachine: STDOUT: 
	I0213 15:17:15.217291    4702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:15.217311    4702 client.go:171] LocalClient.Create took 331.7165ms
	I0213 15:17:17.219511    4702 start.go:128] duration metric: createHost completed in 2.35939175s
	I0213 15:17:17.219597    4702 start.go:83] releasing machines lock for "false-891000", held for 2.359535583s
	W0213 15:17:17.219635    4702 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:17.229265    4702 out.go:177] * Deleting "false-891000" in qemu2 ...
	W0213 15:17:17.246909    4702 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:17.246938    4702 start.go:709] Will try again in 5 seconds ...
	I0213 15:17:22.249052    4702 start.go:365] acquiring machines lock for false-891000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:22.249515    4702 start.go:369] acquired machines lock for "false-891000" in 343.5µs
	I0213 15:17:22.249629    4702 start.go:93] Provisioning new machine with config: &{Name:false-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:17:22.255272    4702 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:17:22.259636    4702 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 15:17:22.296516    4702 start.go:159] libmachine.API.Create for "false-891000" (driver="qemu2")
	I0213 15:17:22.296574    4702 client.go:168] LocalClient.Create starting
	I0213 15:17:22.296687    4702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:17:22.296740    4702 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:22.296754    4702 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:22.296815    4702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:17:22.296850    4702 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:22.296864    4702 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:22.297295    4702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:17:22.426583    4702 main.go:141] libmachine: Creating SSH key...
	I0213 15:17:22.470437    4702 main.go:141] libmachine: Creating Disk image...
	I0213 15:17:22.470445    4702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:17:22.470640    4702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/disk.qcow2
	I0213 15:17:22.483287    4702 main.go:141] libmachine: STDOUT: 
	I0213 15:17:22.483316    4702 main.go:141] libmachine: STDERR: 
	I0213 15:17:22.483384    4702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/disk.qcow2 +20000M
	I0213 15:17:22.494310    4702 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:17:22.494336    4702 main.go:141] libmachine: STDERR: 
	I0213 15:17:22.494353    4702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/disk.qcow2
	I0213 15:17:22.494358    4702 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:17:22.494404    4702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:c8:f0:ff:c9:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/false-891000/disk.qcow2
	I0213 15:17:22.496171    4702 main.go:141] libmachine: STDOUT: 
	I0213 15:17:22.496199    4702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:22.496213    4702 client.go:171] LocalClient.Create took 199.63425ms
	I0213 15:17:24.498401    4702 start.go:128] duration metric: createHost completed in 2.243106917s
	I0213 15:17:24.498488    4702 start.go:83] releasing machines lock for "false-891000", held for 2.2490005s
	W0213 15:17:24.498973    4702 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:24.508639    4702 out.go:177] 
	W0213 15:17:24.512657    4702 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:17:24.512692    4702 out.go:239] * 
	* 
	W0213 15:17:24.515549    4702 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:17:24.524618    4702 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.80s)
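
This start failure, like the other qemu2 start failures in this report, reduces to one root cause: nothing is accepting connections on the socket_vmnet control socket, so every launch through socket_vmnet_client is refused. A minimal diagnostic sketch in Python (assuming the daemon is expected at /var/run/socket_vmnet, the path shown in the log) that separates "socket file missing" from "daemon down":

    # Probe the unix socket the failing qemu launch targets. Path taken from the
    # log above; FileNotFoundError means socket_vmnet was never started, while
    # ConnectionRefusedError means the file exists but no daemon is accepting.
    import socket

    SOCKET_PATH = "/var/run/socket_vmnet"

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(SOCKET_PATH)
        print("socket_vmnet is up and accepting connections")
    except FileNotFoundError:
        print(f"{SOCKET_PATH} is missing: the daemon was never started")
    except ConnectionRefusedError:
        print(f"{SOCKET_PATH} exists but nothing is listening: daemon is down")
    finally:
        s.close()

On this agent the probe would report the "Connection refused" case, matching the STDERR lines above.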

TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-417000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-417000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.826067417s)

-- stdout --
	* [old-k8s-version-417000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-417000 in cluster old-k8s-version-417000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-417000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:17:26.795607    4815 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:17:26.795758    4815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:26.795761    4815 out.go:304] Setting ErrFile to fd 2...
	I0213 15:17:26.795764    4815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:26.795917    4815 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:17:26.797176    4815 out.go:298] Setting JSON to false
	I0213 15:17:26.814018    4815 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2668,"bootTime":1707863578,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:17:26.814083    4815 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:17:26.818607    4815 out.go:177] * [old-k8s-version-417000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:17:26.825430    4815 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:17:26.829447    4815 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:17:26.825570    4815 notify.go:220] Checking for updates...
	I0213 15:17:26.835480    4815 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:17:26.838450    4815 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:17:26.841367    4815 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:17:26.844427    4815 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:17:26.847846    4815 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:17:26.847913    4815 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:17:26.847963    4815 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:17:26.852339    4815 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:17:26.859469    4815 start.go:298] selected driver: qemu2
	I0213 15:17:26.859475    4815 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:17:26.859480    4815 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:17:26.861774    4815 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:17:26.865362    4815 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:17:26.868462    4815 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:17:26.868514    4815 cni.go:84] Creating CNI manager for ""
	I0213 15:17:26.868522    4815 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 15:17:26.868528    4815 start_flags.go:321] config:
	{Name:old-k8s-version-417000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-417000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:17:26.873233    4815 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:26.880437    4815 out.go:177] * Starting control plane node old-k8s-version-417000 in cluster old-k8s-version-417000
	I0213 15:17:26.884401    4815 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 15:17:26.884424    4815 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0213 15:17:26.884433    4815 cache.go:56] Caching tarball of preloaded images
	I0213 15:17:26.884533    4815 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:17:26.884538    4815 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 15:17:26.884607    4815 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/old-k8s-version-417000/config.json ...
	I0213 15:17:26.884618    4815 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/old-k8s-version-417000/config.json: {Name:mk76b9933dec1f2f6479827392c3b3f4e1e8a1a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:17:26.884828    4815 start.go:365] acquiring machines lock for old-k8s-version-417000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:26.884864    4815 start.go:369] acquired machines lock for "old-k8s-version-417000" in 27µs
	I0213 15:17:26.884879    4815 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-417000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-417000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:17:26.884912    4815 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:17:26.905475    4815 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:17:26.923691    4815 start.go:159] libmachine.API.Create for "old-k8s-version-417000" (driver="qemu2")
	I0213 15:17:26.923720    4815 client.go:168] LocalClient.Create starting
	I0213 15:17:26.923794    4815 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:17:26.923824    4815 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:26.923833    4815 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:26.923873    4815 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:17:26.923897    4815 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:26.923905    4815 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:26.924285    4815 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:17:27.047055    4815 main.go:141] libmachine: Creating SSH key...
	I0213 15:17:27.207747    4815 main.go:141] libmachine: Creating Disk image...
	I0213 15:17:27.207755    4815 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:17:27.207959    4815 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2
	I0213 15:17:27.220718    4815 main.go:141] libmachine: STDOUT: 
	I0213 15:17:27.220737    4815 main.go:141] libmachine: STDERR: 
	I0213 15:17:27.220812    4815 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2 +20000M
	I0213 15:17:27.231770    4815 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:17:27.231792    4815 main.go:141] libmachine: STDERR: 
	I0213 15:17:27.231808    4815 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2
	I0213 15:17:27.231819    4815 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:17:27.231850    4815 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:8f:38:57:88:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2
	I0213 15:17:27.233651    4815 main.go:141] libmachine: STDOUT: 
	I0213 15:17:27.233674    4815 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:27.233695    4815 client.go:171] LocalClient.Create took 309.976625ms
	I0213 15:17:29.235999    4815 start.go:128] duration metric: createHost completed in 2.351088416s
	I0213 15:17:29.236100    4815 start.go:83] releasing machines lock for "old-k8s-version-417000", held for 2.351276583s
	W0213 15:17:29.236175    4815 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:29.247482    4815 out.go:177] * Deleting "old-k8s-version-417000" in qemu2 ...
	W0213 15:17:29.267788    4815 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:29.267835    4815 start.go:709] Will try again in 5 seconds ...
	I0213 15:17:34.269908    4815 start.go:365] acquiring machines lock for old-k8s-version-417000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:34.270307    4815 start.go:369] acquired machines lock for "old-k8s-version-417000" in 267.625µs
	I0213 15:17:34.270356    4815 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-417000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-417000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:17:34.270517    4815 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:17:34.275050    4815 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:17:34.315705    4815 start.go:159] libmachine.API.Create for "old-k8s-version-417000" (driver="qemu2")
	I0213 15:17:34.315775    4815 client.go:168] LocalClient.Create starting
	I0213 15:17:34.315915    4815 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:17:34.315983    4815 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:34.316000    4815 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:34.316079    4815 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:17:34.316128    4815 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:34.316139    4815 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:34.316691    4815 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:17:34.448586    4815 main.go:141] libmachine: Creating SSH key...
	I0213 15:17:34.520510    4815 main.go:141] libmachine: Creating Disk image...
	I0213 15:17:34.520518    4815 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:17:34.520715    4815 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2
	I0213 15:17:34.533330    4815 main.go:141] libmachine: STDOUT: 
	I0213 15:17:34.533352    4815 main.go:141] libmachine: STDERR: 
	I0213 15:17:34.533434    4815 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2 +20000M
	I0213 15:17:34.544712    4815 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:17:34.544732    4815 main.go:141] libmachine: STDERR: 
	I0213 15:17:34.544752    4815 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2
	I0213 15:17:34.544758    4815 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:17:34.544799    4815 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:40:fe:ae:bb:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2
	I0213 15:17:34.546555    4815 main.go:141] libmachine: STDOUT: 
	I0213 15:17:34.546568    4815 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:34.546585    4815 client.go:171] LocalClient.Create took 230.810167ms
	I0213 15:17:36.548734    4815 start.go:128] duration metric: createHost completed in 2.278231834s
	I0213 15:17:36.548832    4815 start.go:83] releasing machines lock for "old-k8s-version-417000", held for 2.278556167s
	W0213 15:17:36.549184    4815 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-417000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-417000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:36.559844    4815 out.go:177] 
	W0213 15:17:36.563881    4815 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:17:36.563901    4815 out.go:239] * 
	* 
	W0213 15:17:36.565795    4815 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:17:36.576862    4815 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-417000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000: exit status 7 (62.516125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-417000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)
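
FirstStart follows the same pattern: both create attempts hit the refused socket, minikube exits with status 80 (the code the log ties to GUEST_PROVISION), and the harness records the non-zero exit. A rough Python equivalent of the harness's check, with the flags copied verbatim from the log (a sketch under those assumptions, not the actual Go test code at start_stop_delete_test.go:186):

    # Run the same start command the test does and treat any non-zero exit
    # as a failed first start; this run would print exit status 80.
    import subprocess

    cmd = [
        "out/minikube-darwin-arm64", "start", "-p", "old-k8s-version-417000",
        "--memory=2200", "--alsologtostderr", "--wait=true",
        "--kvm-network=default", "--kvm-qemu-uri=qemu:///system",
        "--disable-driver-mounts", "--keep-context=false",
        "--driver=qemu2", "--kubernetes-version=v1.16.0",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"failed starting minikube -first start-: exit status {result.returncode}")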

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-417000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-417000 create -f testdata/busybox.yaml: exit status 1 (29.2085ms)

** stderr ** 
	error: context "old-k8s-version-417000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-417000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000: exit status 7 (31.601542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-417000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000: exit status 7 (31.444208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-417000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
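
The remaining serial steps then fail mechanically: because FirstStart aborted, no context named old-k8s-version-417000 was ever written to the kubeconfig, so kubectl gives up before contacting any API server. A small Python sketch of the missing precondition, using the standard kubectl config get-contexts subcommand:

    # Check whether the kubeconfig contains the context the deploy step needs.
    import subprocess

    ctx = "old-k8s-version-417000"
    out = subprocess.run(
        ["kubectl", "config", "get-contexts", "-o", "name"],
        capture_output=True, text=True, check=True,
    )
    if ctx not in out.stdout.split():
        print(f'context "{ctx}" does not exist')  # the error kubectl reports above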

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-417000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-417000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-417000 describe deploy/metrics-server -n kube-system: exit status 1 (28.338917ms)

** stderr ** 
	error: context "old-k8s-version-417000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-417000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000: exit status 7 (32.29925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-417000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
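
The addon check fails for the same downstream reason: the describe call returns nothing, so the expected image string is matched against empty deployment info. The expectation itself is just the custom registry joined to the custom image from the enable command's flags; a minimal reconstruction (the leading space mirrors the string quoted in the log):

    # Rebuild the string the test expects to find in the deployment description.
    registry = "fake.domain"                  # from --registries=MetricsServer=fake.domain
    image = "registry.k8s.io/echoserver:1.4"  # from --images=MetricsServer=registry.k8s.io/echoserver:1.4
    expected = " " + registry + "/" + image
    deployment_info = ""                      # empty here: the describe call failed
    print(expected in deployment_info)        # False -> "addon did not load correct image"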

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-417000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-417000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.1956805s)

-- stdout --
	* [old-k8s-version-417000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-417000 in cluster old-k8s-version-417000
	* Restarting existing qemu2 VM for "old-k8s-version-417000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-417000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:17:37.062135    4851 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:17:37.062274    4851 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:37.062277    4851 out.go:304] Setting ErrFile to fd 2...
	I0213 15:17:37.062279    4851 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:37.062417    4851 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:17:37.063420    4851 out.go:298] Setting JSON to false
	I0213 15:17:37.079863    4851 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2679,"bootTime":1707863578,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:17:37.079971    4851 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:17:37.084539    4851 out.go:177] * [old-k8s-version-417000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:17:37.091503    4851 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:17:37.095510    4851 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:17:37.091559    4851 notify.go:220] Checking for updates...
	I0213 15:17:37.099398    4851 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:17:37.102496    4851 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:17:37.105522    4851 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:17:37.108465    4851 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:17:37.111794    4851 config.go:182] Loaded profile config "old-k8s-version-417000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0213 15:17:37.116474    4851 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0213 15:17:37.119484    4851 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:17:37.124505    4851 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 15:17:37.131501    4851 start.go:298] selected driver: qemu2
	I0213 15:17:37.131505    4851 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-417000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-417000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:17:37.131550    4851 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:17:37.133868    4851 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:17:37.133919    4851 cni.go:84] Creating CNI manager for ""
	I0213 15:17:37.133927    4851 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 15:17:37.133931    4851 start_flags.go:321] config:
	{Name:old-k8s-version-417000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-417000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:17:37.138247    4851 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:37.146469    4851 out.go:177] * Starting control plane node old-k8s-version-417000 in cluster old-k8s-version-417000
	I0213 15:17:37.150538    4851 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 15:17:37.150555    4851 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0213 15:17:37.150566    4851 cache.go:56] Caching tarball of preloaded images
	I0213 15:17:37.150627    4851 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:17:37.150633    4851 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 15:17:37.150707    4851 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/old-k8s-version-417000/config.json ...
	I0213 15:17:37.151215    4851 start.go:365] acquiring machines lock for old-k8s-version-417000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:37.151240    4851 start.go:369] acquired machines lock for "old-k8s-version-417000" in 19.584µs
	I0213 15:17:37.151248    4851 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:17:37.151254    4851 fix.go:54] fixHost starting: 
	I0213 15:17:37.151370    4851 fix.go:102] recreateIfNeeded on old-k8s-version-417000: state=Stopped err=<nil>
	W0213 15:17:37.151378    4851 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:17:37.154415    4851 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-417000" ...
	I0213 15:17:37.162521    4851 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:40:fe:ae:bb:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2
	I0213 15:17:37.164593    4851 main.go:141] libmachine: STDOUT: 
	I0213 15:17:37.164613    4851 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:37.164641    4851 fix.go:56] fixHost completed within 13.388291ms
	I0213 15:17:37.164645    4851 start.go:83] releasing machines lock for "old-k8s-version-417000", held for 13.4015ms
	W0213 15:17:37.164650    4851 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:17:37.164692    4851 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:37.164697    4851 start.go:709] Will try again in 5 seconds ...
	I0213 15:17:42.166778    4851 start.go:365] acquiring machines lock for old-k8s-version-417000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:42.167138    4851 start.go:369] acquired machines lock for "old-k8s-version-417000" in 276.625µs
	I0213 15:17:42.167303    4851 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:17:42.167323    4851 fix.go:54] fixHost starting: 
	I0213 15:17:42.168027    4851 fix.go:102] recreateIfNeeded on old-k8s-version-417000: state=Stopped err=<nil>
	W0213 15:17:42.168054    4851 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:17:42.177499    4851 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-417000" ...
	I0213 15:17:42.182499    4851 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:40:fe:ae:bb:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/old-k8s-version-417000/disk.qcow2
	I0213 15:17:42.192846    4851 main.go:141] libmachine: STDOUT: 
	I0213 15:17:42.192910    4851 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:42.193000    4851 fix.go:56] fixHost completed within 25.676625ms
	I0213 15:17:42.193017    4851 start.go:83] releasing machines lock for "old-k8s-version-417000", held for 25.857291ms
	W0213 15:17:42.193232    4851 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-417000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-417000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:42.200440    4851 out.go:177] 
	W0213 15:17:42.203433    4851 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:17:42.203456    4851 out.go:239] * 
	* 
	W0213 15:17:42.205879    4851 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:17:42.213406    4851 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-417000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000: exit status 7 (65.555375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-417000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
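
Every start attempt above dies at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first dial the unix-domain socket at /var/run/socket_vmnet, and that dial is refused, meaning no socket_vmnet daemon is listening on the agent. A minimal Go sketch of the same reachability probe, useful for vetting the host before a re-run (the socket path is taken from the log above; the program itself is illustrative and not part of the suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the same unix socket socket_vmnet_client connects to.
        // "connection refused" reproduces the failure in the log: the
        // socket file may exist, but nothing is accepting on it.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }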
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-417000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000: exit status 7 (32.829542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-417000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-417000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-417000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-417000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.970458ms)
** stderr ** 
	error: context "old-k8s-version-417000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-417000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000: exit status 7 (31.482417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-417000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
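
This check and UserAppExistsAfterStop above fail for the same downstream reason: the failed SecondStart never recreated the cluster, so the kubeconfig the suite points KUBECONFIG at has no context named old-k8s-version-417000 for kubectl or the test's client config to resolve. A hedged client-go sketch of that context lookup (kubeconfig path and context name are from the log; the program is illustrative):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the suite's kubeconfig and look up the profile's context;
        // this resolution step is what fails in both tests above.
        cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/18170-979/kubeconfig")
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        if _, ok := cfg.Contexts["old-k8s-version-417000"]; !ok {
            fmt.Println(`context "old-k8s-version-417000" does not exist`)
            return
        }
        fmt.Println("context present")
    }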
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-417000 image list --format=json
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000: exit status 7 (31.94375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-417000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
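
The block above is a want/got diff: the "-" lines are the images expected for v1.16.0, and the got side is empty because `image list --format=json` had no running VM to query. A small sketch of the same kind of set difference (image names copied from the diff above; the helper is illustrative, not the test's actual comparison, which uses go-cmp):

    package main

    import "fmt"

    // missing returns the entries of want absent from got; with an empty
    // got, every expected image comes back, matching the diff above.
    func missing(want, got []string) []string {
        have := make(map[string]bool, len(got))
        for _, g := range got {
            have[g] = true
        }
        var out []string
        for _, w := range want {
            if !have[w] {
                out = append(out, w)
            }
        }
        return out
    }

    func main() {
        want := []string{
            "k8s.gcr.io/kube-apiserver:v1.16.0",
            "k8s.gcr.io/pause:3.1",
        }
        fmt.Println(missing(want, nil)) // empty "got": everything is missing
    }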
TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-417000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-417000 --alsologtostderr -v=1: exit status 89 (45.755458ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-417000"
-- /stdout --
** stderr ** 
	I0213 15:17:42.490029    4870 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:17:42.491029    4870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:42.491042    4870 out.go:304] Setting ErrFile to fd 2...
	I0213 15:17:42.491045    4870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:42.491222    4870 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:17:42.491436    4870 out.go:298] Setting JSON to false
	I0213 15:17:42.491446    4870 mustload.go:65] Loading cluster: old-k8s-version-417000
	I0213 15:17:42.491645    4870 config.go:182] Loaded profile config "old-k8s-version-417000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0213 15:17:42.496339    4870 out.go:177] * The control plane node must be running for this command
	I0213 15:17:42.500386    4870 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-417000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-417000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000: exit status 7 (31.156209ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-417000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000: exit status 7 (31.914375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-417000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
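
Here pause exits 89 because the control plane node is stopped, and the post-mortem then runs `status --format={{.Host}}`, which prints "Stopped" and itself exits 7 (which helpers_test treats as possibly OK). A short sketch of reading that state and exit code the way the harness does (binary path, flags, and profile name are from the log; the program is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same probe the post-mortem runs: print the host state and
        // surface the status command's exit code (7 for a stopped host).
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-417000")
        out, err := cmd.Output()
        fmt.Printf("host state: %s", out) // "Stopped" in the runs above
        if ee, ok := err.(*exec.ExitError); ok {
            fmt.Println("exit status:", ee.ExitCode())
        }
    }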
TestStartStop/group/no-preload/serial/FirstStart (10.02s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-843000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-843000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.969715292s)
-- stdout --
	* [no-preload-843000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-843000 in cluster no-preload-843000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-843000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0213 15:17:42.968236    4893 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:17:42.968385    4893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:42.968388    4893 out.go:304] Setting ErrFile to fd 2...
	I0213 15:17:42.968390    4893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:42.968514    4893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:17:42.969563    4893 out.go:298] Setting JSON to false
	I0213 15:17:42.986084    4893 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2684,"bootTime":1707863578,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:17:42.986148    4893 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:17:42.991117    4893 out.go:177] * [no-preload-843000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:17:42.996980    4893 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:17:42.997029    4893 notify.go:220] Checking for updates...
	I0213 15:17:43.001016    4893 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:17:43.003916    4893 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:17:43.006995    4893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:17:43.010061    4893 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:17:43.011504    4893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:17:43.015344    4893 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:17:43.015404    4893 config.go:182] Loaded profile config "stopped-upgrade-809000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0213 15:17:43.015454    4893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:17:43.019914    4893 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:17:43.029035    4893 start.go:298] selected driver: qemu2
	I0213 15:17:43.029041    4893 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:17:43.029047    4893 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:17:43.031347    4893 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:17:43.033948    4893 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:17:43.035295    4893 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:17:43.035332    4893 cni.go:84] Creating CNI manager for ""
	I0213 15:17:43.035339    4893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:17:43.035345    4893 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:17:43.035350    4893 start_flags.go:321] config:
	{Name:no-preload-843000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-843000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:17:43.039790    4893 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:43.047056    4893 out.go:177] * Starting control plane node no-preload-843000 in cluster no-preload-843000
	I0213 15:17:43.050993    4893 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 15:17:43.051114    4893 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/no-preload-843000/config.json ...
	I0213 15:17:43.051112    4893 cache.go:107] acquiring lock: {Name:mkd2c193926e7a95476bbdf7d96957c2d4298fae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:43.051138    4893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/no-preload-843000/config.json: {Name:mkc0a06368051aa6ea654201f6de0ef9a73677b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:17:43.051129    4893 cache.go:107] acquiring lock: {Name:mk18a45507aefbe6505b04b99491488a724a6995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:43.051131    4893 cache.go:107] acquiring lock: {Name:mk92da554926a2839c6a7ccce2df133fa4e968f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:43.051242    4893 cache.go:115] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0213 15:17:43.051252    4893 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.5µs
	I0213 15:17:43.051258    4893 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0213 15:17:43.051263    4893 cache.go:107] acquiring lock: {Name:mk513c736760baa6eb81f8c9157af893d377521b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:43.051316    4893 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 15:17:43.051328    4893 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 15:17:43.051335    4893 cache.go:107] acquiring lock: {Name:mke82a3aa6b2802b44509fd7900f37f42546bd3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:43.051326    4893 cache.go:107] acquiring lock: {Name:mk249eacc60abd974fd923a88dee770691ca3cb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:43.051340    4893 cache.go:107] acquiring lock: {Name:mk319f11cf5707e8d86c309b4f9f4d7bdae32d5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:43.051387    4893 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 15:17:43.051390    4893 start.go:365] acquiring machines lock for no-preload-843000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:43.051400    4893 cache.go:107] acquiring lock: {Name:mkeb5b5275535893e3d0b23404383a46a92c7ca1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:43.051481    4893 start.go:369] acquired machines lock for "no-preload-843000" in 86µs
	I0213 15:17:43.051508    4893 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 15:17:43.051495    4893 start.go:93] Provisioning new machine with config: &{Name:no-preload-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-843000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:17:43.051547    4893 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:17:43.053105    4893 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:17:43.051594    4893 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0213 15:17:43.051720    4893 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0213 15:17:43.051967    4893 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 15:17:43.057841    4893 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 15:17:43.057894    4893 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0213 15:17:43.057918    4893 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0213 15:17:43.057939    4893 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 15:17:43.058402    4893 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 15:17:43.058466    4893 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 15:17:43.059412    4893 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 15:17:43.068562    4893 start.go:159] libmachine.API.Create for "no-preload-843000" (driver="qemu2")
	I0213 15:17:43.068583    4893 client.go:168] LocalClient.Create starting
	I0213 15:17:43.068655    4893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:17:43.068683    4893 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:43.068700    4893 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:43.068739    4893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:17:43.068761    4893 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:43.068768    4893 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:43.069105    4893 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:17:43.241990    4893 main.go:141] libmachine: Creating SSH key...
	I0213 15:17:43.506612    4893 main.go:141] libmachine: Creating Disk image...
	I0213 15:17:43.506631    4893 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:17:43.506830    4893 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2
	I0213 15:17:43.519282    4893 main.go:141] libmachine: STDOUT: 
	I0213 15:17:43.519298    4893 main.go:141] libmachine: STDERR: 
	I0213 15:17:43.519348    4893 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2 +20000M
	I0213 15:17:43.530593    4893 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:17:43.530616    4893 main.go:141] libmachine: STDERR: 
	I0213 15:17:43.530637    4893 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2
	I0213 15:17:43.530642    4893 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:17:43.530683    4893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:12:f4:80:ca:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2
	I0213 15:17:43.532558    4893 main.go:141] libmachine: STDOUT: 
	I0213 15:17:43.532574    4893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:43.532599    4893 client.go:171] LocalClient.Create took 464.021375ms
	I0213 15:17:45.141747    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0213 15:17:45.316764    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0213 15:17:45.341449    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0213 15:17:45.353610    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0213 15:17:45.376191    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0213 15:17:45.379939    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0213 15:17:45.386889    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0213 15:17:45.533067    4893 start.go:128] duration metric: createHost completed in 2.481547959s
	I0213 15:17:45.533126    4893 start.go:83] releasing machines lock for "no-preload-843000", held for 2.481687417s
	W0213 15:17:45.533182    4893 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:45.550234    4893 out.go:177] * Deleting "no-preload-843000" in qemu2 ...
	W0213 15:17:45.574485    4893 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:45.574529    4893 start.go:709] Will try again in 5 seconds ...
	I0213 15:17:45.622651    4893 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0213 15:17:45.622673    4893 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.571466875s
	I0213 15:17:45.622681    4893 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0213 15:17:48.955620    4893 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0213 15:17:48.955674    4893 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 5.904495833s
	I0213 15:17:48.955733    4893 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0213 15:17:49.370416    4893 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0213 15:17:49.370467    4893 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 6.3193305s
	I0213 15:17:49.370531    4893 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0213 15:17:49.620806    4893 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0213 15:17:49.620855    4893 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 6.569736208s
	I0213 15:17:49.620885    4893 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0213 15:17:50.038613    4893 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0213 15:17:50.038671    4893 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 6.987705875s
	I0213 15:17:50.038701    4893 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0213 15:17:50.144102    4893 cache.go:157] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0213 15:17:50.144147    4893 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 7.093178375s
	I0213 15:17:50.144190    4893 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0213 15:17:50.574734    4893 start.go:365] acquiring machines lock for no-preload-843000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:50.575100    4893 start.go:369] acquired machines lock for "no-preload-843000" in 296.583µs
	I0213 15:17:50.575227    4893 start.go:93] Provisioning new machine with config: &{Name:no-preload-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-843000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:17:50.575513    4893 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:17:50.585040    4893 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:17:50.635956    4893 start.go:159] libmachine.API.Create for "no-preload-843000" (driver="qemu2")
	I0213 15:17:50.636005    4893 client.go:168] LocalClient.Create starting
	I0213 15:17:50.636122    4893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:17:50.636183    4893 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:50.636199    4893 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:50.636283    4893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:17:50.636323    4893 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:50.636340    4893 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:50.636869    4893 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:17:50.769538    4893 main.go:141] libmachine: Creating SSH key...
	I0213 15:17:50.829334    4893 main.go:141] libmachine: Creating Disk image...
	I0213 15:17:50.829340    4893 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:17:50.829533    4893 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2
	I0213 15:17:50.842165    4893 main.go:141] libmachine: STDOUT: 
	I0213 15:17:50.842189    4893 main.go:141] libmachine: STDERR: 
	I0213 15:17:50.842245    4893 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2 +20000M
	I0213 15:17:50.853397    4893 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:17:50.853418    4893 main.go:141] libmachine: STDERR: 
	I0213 15:17:50.853439    4893 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2
	I0213 15:17:50.853446    4893 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:17:50.853489    4893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:5e:b1:c1:d5:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2
	I0213 15:17:50.855365    4893 main.go:141] libmachine: STDOUT: 
	I0213 15:17:50.855383    4893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:50.855397    4893 client.go:171] LocalClient.Create took 219.390125ms
	I0213 15:17:52.855609    4893 start.go:128] duration metric: createHost completed in 2.280114834s
	I0213 15:17:52.855672    4893 start.go:83] releasing machines lock for "no-preload-843000", held for 2.280598458s
	W0213 15:17:52.855999    4893 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-843000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-843000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:52.880559    4893 out.go:177] 
	W0213 15:17:52.884680    4893 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:17:52.884717    4893 out.go:239] * 
	* 
	W0213 15:17:52.886719    4893 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:17:52.896579    4893 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-843000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (50.019417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.02s)
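
Every start in this group dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon on its unix socket, so the guest VM never boots and everything downstream fails. The Go sketch below reproduces that connectivity check in isolation; probeSocketVMnet is an illustrative helper (not minikube code), and the socket path is the SocketVMnetPath value printed in the config dumps in this report.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSocketVMnet dials the unix socket that socket_vmnet_client uses.
	// "Connection refused" here is the same condition behind the ERROR lines
	// in the log: nothing is accepting connections at that path.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Println(err) // expected on this CI host while the daemon is down
		}
	}

On a healthy host the dial succeeds and the probe prints nothing; here it refuses, which typically means the socket_vmnet daemon is not running on the agent.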

TestStartStop/group/embed-certs/serial/FirstStart (11.45s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-876000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-876000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (11.376357542s)

-- stdout --
	* [embed-certs-876000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-876000 in cluster embed-certs-876000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-876000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:17:43.922102    4936 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:17:43.922222    4936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:43.922225    4936 out.go:304] Setting ErrFile to fd 2...
	I0213 15:17:43.922228    4936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:43.922348    4936 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:17:43.923370    4936 out.go:298] Setting JSON to false
	I0213 15:17:43.939594    4936 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2685,"bootTime":1707863578,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:17:43.939697    4936 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:17:43.942609    4936 out.go:177] * [embed-certs-876000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:17:43.947416    4936 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:17:43.951393    4936 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:17:43.947463    4936 notify.go:220] Checking for updates...
	I0213 15:17:43.957389    4936 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:17:43.960417    4936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:17:43.961430    4936 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:17:43.964363    4936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:17:43.967829    4936 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:17:43.967899    4936 config.go:182] Loaded profile config "no-preload-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 15:17:43.967959    4936 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:17:43.971197    4936 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:17:43.977404    4936 start.go:298] selected driver: qemu2
	I0213 15:17:43.977410    4936 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:17:43.977416    4936 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:17:43.979618    4936 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:17:43.982416    4936 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:17:43.985484    4936 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:17:43.985534    4936 cni.go:84] Creating CNI manager for ""
	I0213 15:17:43.985548    4936 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:17:43.985552    4936 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:17:43.985558    4936 start_flags.go:321] config:
	{Name:embed-certs-876000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-876000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:17:43.990151    4936 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:43.996361    4936 out.go:177] * Starting control plane node embed-certs-876000 in cluster embed-certs-876000
	I0213 15:17:44.000392    4936 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:17:44.000411    4936 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:17:44.000425    4936 cache.go:56] Caching tarball of preloaded images
	I0213 15:17:44.000483    4936 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:17:44.000488    4936 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:17:44.000574    4936 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/embed-certs-876000/config.json ...
	I0213 15:17:44.000588    4936 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/embed-certs-876000/config.json: {Name:mke107d5339f666fccfe648b2ea05f45420d9219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:17:44.000905    4936 start.go:365] acquiring machines lock for embed-certs-876000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:45.533278    4936 start.go:369] acquired machines lock for "embed-certs-876000" in 1.532381459s
	I0213 15:17:45.533453    4936 start.go:93] Provisioning new machine with config: &{Name:embed-certs-876000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-876000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:17:45.533705    4936 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:17:45.541348    4936 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:17:45.593382    4936 start.go:159] libmachine.API.Create for "embed-certs-876000" (driver="qemu2")
	I0213 15:17:45.593432    4936 client.go:168] LocalClient.Create starting
	I0213 15:17:45.593547    4936 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:17:45.593602    4936 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:45.593620    4936 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:45.593691    4936 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:17:45.593733    4936 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:45.593744    4936 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:45.594329    4936 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:17:45.726302    4936 main.go:141] libmachine: Creating SSH key...
	I0213 15:17:45.791395    4936 main.go:141] libmachine: Creating Disk image...
	I0213 15:17:45.791401    4936 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:17:45.791603    4936 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2
	I0213 15:17:45.804367    4936 main.go:141] libmachine: STDOUT: 
	I0213 15:17:45.804398    4936 main.go:141] libmachine: STDERR: 
	I0213 15:17:45.804451    4936 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2 +20000M
	I0213 15:17:45.815479    4936 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:17:45.815495    4936 main.go:141] libmachine: STDERR: 
	I0213 15:17:45.815513    4936 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2
	I0213 15:17:45.815527    4936 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:17:45.815567    4936 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:30:49:0b:7c:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2
	I0213 15:17:45.817367    4936 main.go:141] libmachine: STDOUT: 
	I0213 15:17:45.817386    4936 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:45.817407    4936 client.go:171] LocalClient.Create took 223.973959ms
	I0213 15:17:47.819544    4936 start.go:128] duration metric: createHost completed in 2.285854458s
	I0213 15:17:47.819630    4936 start.go:83] releasing machines lock for "embed-certs-876000", held for 2.286289708s
	W0213 15:17:47.819712    4936 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:47.829202    4936 out.go:177] * Deleting "embed-certs-876000" in qemu2 ...
	W0213 15:17:47.854847    4936 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:47.854883    4936 start.go:709] Will try again in 5 seconds ...
	I0213 15:17:52.855696    4936 start.go:365] acquiring machines lock for embed-certs-876000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:52.856191    4936 start.go:369] acquired machines lock for "embed-certs-876000" in 376.584µs
	I0213 15:17:52.856313    4936 start.go:93] Provisioning new machine with config: &{Name:embed-certs-876000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-876000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:17:52.856497    4936 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:17:52.870571    4936 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:17:52.921045    4936 start.go:159] libmachine.API.Create for "embed-certs-876000" (driver="qemu2")
	I0213 15:17:52.921087    4936 client.go:168] LocalClient.Create starting
	I0213 15:17:52.921231    4936 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:17:52.921294    4936 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:52.921312    4936 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:52.921380    4936 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:17:52.921427    4936 main.go:141] libmachine: Decoding PEM data...
	I0213 15:17:52.921442    4936 main.go:141] libmachine: Parsing certificate...
	I0213 15:17:52.921945    4936 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:17:53.061606    4936 main.go:141] libmachine: Creating SSH key...
	I0213 15:17:53.184191    4936 main.go:141] libmachine: Creating Disk image...
	I0213 15:17:53.184200    4936 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:17:53.184431    4936 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2
	I0213 15:17:53.197705    4936 main.go:141] libmachine: STDOUT: 
	I0213 15:17:53.197747    4936 main.go:141] libmachine: STDERR: 
	I0213 15:17:53.197817    4936 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2 +20000M
	I0213 15:17:53.211068    4936 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:17:53.211107    4936 main.go:141] libmachine: STDERR: 
	I0213 15:17:53.211118    4936 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2
	I0213 15:17:53.211122    4936 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:17:53.211168    4936 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:bb:99:49:38:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2
	I0213 15:17:53.213197    4936 main.go:141] libmachine: STDOUT: 
	I0213 15:17:53.213229    4936 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:53.213242    4936 client.go:171] LocalClient.Create took 292.156042ms
	I0213 15:17:55.213744    4936 start.go:128] duration metric: createHost completed in 2.357216834s
	I0213 15:17:55.213834    4936 start.go:83] releasing machines lock for "embed-certs-876000", held for 2.357668959s
	W0213 15:17:55.214272    4936 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-876000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-876000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:55.221010    4936 out.go:177] 
	W0213 15:17:55.232082    4936 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:17:55.232121    4936 out.go:239] * 
	* 
	W0213 15:17:55.234584    4936 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:17:55.248095    4936 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-876000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000: exit status 7 (67.415042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-876000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.45s)
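
The embed-certs log shows the disk-provisioning half of createHost succeeding before the network step fails: minikube shells out to qemu-img twice, a raw-to-qcow2 convert followed by a +20000M resize. A minimal Go sketch of that two-command sequence, with placeholder paths rather than the Jenkins machine directory above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk mirrors the two qemu-img invocations in the log: convert the
	// raw seed image to qcow2, then grow the qcow2 by sizeMB megabytes.
	func createDisk(raw, qcow2 string, sizeMB int) error {
		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
			{"qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", sizeMB)},
		}
		for _, s := range steps {
			if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", s, err, out)
			}
		}
		return nil
	}

	func main() {
		// placeholder paths, not the CI paths from the report
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}

Both steps succeed in the log ("STDOUT: Image resized."), which confines the failure to the socket_vmnet connection made when the VM is actually launched.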

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-843000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-843000 create -f testdata/busybox.yaml: exit status 1 (30.484458ms)

** stderr ** 
	error: context "no-preload-843000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-843000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (36.590625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (35.567875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
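
This failure is purely downstream of FirstStart: because the cluster never came up, minikube never wrote a no-preload-843000 context into the kubeconfig, so every kubectl --context call can only fail. A hedged sketch of the guard a harness could run first (contextExists is illustrative, not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// contextExists asks kubectl whether a named kubeconfig context is present;
	// kubectl exits non-zero ("context ... not found") when it is missing,
	// which is the same condition behind the "does not exist" error above.
	func contextExists(name string) bool {
		return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
	}

	func main() {
		fmt.Println(contextExists("no-preload-843000")) // false on this CI host
	}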

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-843000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-843000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-843000 describe deploy/metrics-server -n kube-system: exit status 1 (27.825583ms)

** stderr ** 
	error: context "no-preload-843000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-843000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (32.388917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
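
As far as the assertion above shows, the expected image string is just the two addon flags composed: the --registries override for MetricsServer prefixed onto the --images override. A tiny sketch of that composition:

	package main

	import "fmt"

	func main() {
		image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=...
		registry := "fake.domain"                 // --registries=MetricsServer=...
		fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
	}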

TestStartStop/group/no-preload/serial/SecondStart (7.02s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-843000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-843000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (6.952693875s)

-- stdout --
	* [no-preload-843000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-843000 in cluster no-preload-843000
	* Restarting existing qemu2 VM for "no-preload-843000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-843000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:17:53.387078    4975 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:17:53.387207    4975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:53.387210    4975 out.go:304] Setting ErrFile to fd 2...
	I0213 15:17:53.387213    4975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:53.387340    4975 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:17:53.388365    4975 out.go:298] Setting JSON to false
	I0213 15:17:53.404374    4975 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2695,"bootTime":1707863578,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:17:53.404462    4975 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:17:53.409625    4975 out.go:177] * [no-preload-843000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:17:53.418529    4975 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:17:53.418564    4975 notify.go:220] Checking for updates...
	I0213 15:17:53.425548    4975 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:17:53.429535    4975 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:17:53.432580    4975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:17:53.434151    4975 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:17:53.437524    4975 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:17:53.440839    4975 config.go:182] Loaded profile config "no-preload-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 15:17:53.441119    4975 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:17:53.445377    4975 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 15:17:53.452481    4975 start.go:298] selected driver: qemu2
	I0213 15:17:53.452489    4975 start.go:902] validating driver "qemu2" against &{Name:no-preload-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-843000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:17:53.452541    4975 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:17:53.454826    4975 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:17:53.454855    4975 cni.go:84] Creating CNI manager for ""
	I0213 15:17:53.454862    4975 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:17:53.454867    4975 start_flags.go:321] config:
	{Name:no-preload-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-843000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:17:53.459269    4975 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:53.467534    4975 out.go:177] * Starting control plane node no-preload-843000 in cluster no-preload-843000
	I0213 15:17:53.471585    4975 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 15:17:53.471653    4975 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/no-preload-843000/config.json ...
	I0213 15:17:53.471717    4975 cache.go:107] acquiring lock: {Name:mkd2c193926e7a95476bbdf7d96957c2d4298fae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:53.471723    4975 cache.go:107] acquiring lock: {Name:mk92da554926a2839c6a7ccce2df133fa4e968f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:53.471736    4975 cache.go:107] acquiring lock: {Name:mkeb5b5275535893e3d0b23404383a46a92c7ca1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:53.471764    4975 cache.go:107] acquiring lock: {Name:mke82a3aa6b2802b44509fd7900f37f42546bd3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:53.471808    4975 cache.go:115] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0213 15:17:53.471823    4975 cache.go:115] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0213 15:17:53.471830    4975 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 124.042µs
	I0213 15:17:53.471837    4975 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0213 15:17:53.471818    4975 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 122.458µs
	I0213 15:17:53.471841    4975 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0213 15:17:53.471836    4975 cache.go:107] acquiring lock: {Name:mk18a45507aefbe6505b04b99491488a724a6995 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:53.471848    4975 cache.go:107] acquiring lock: {Name:mk249eacc60abd974fd923a88dee770691ca3cb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:53.471860    4975 cache.go:107] acquiring lock: {Name:mk513c736760baa6eb81f8c9157af893d377521b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:53.471883    4975 cache.go:115] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0213 15:17:53.471887    4975 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 172.625µs
	I0213 15:17:53.471891    4975 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0213 15:17:53.471903    4975 cache.go:115] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0213 15:17:53.471909    4975 cache.go:115] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0213 15:17:53.471907    4975 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 47.833µs
	I0213 15:17:53.471914    4975 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0213 15:17:53.471919    4975 cache.go:115] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0213 15:17:53.471913    4975 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 66µs
	I0213 15:17:53.471925    4975 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0213 15:17:53.471923    4975 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 98.375µs
	I0213 15:17:53.471929    4975 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0213 15:17:53.471940    4975 cache.go:115] /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0213 15:17:53.471944    4975 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 180.208µs
	I0213 15:17:53.471948    4975 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0213 15:17:53.471943    4975 cache.go:107] acquiring lock: {Name:mk319f11cf5707e8d86c309b4f9f4d7bdae32d5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:53.472004    4975 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0213 15:17:53.472124    4975 start.go:365] acquiring machines lock for no-preload-843000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:53.477660    4975 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0213 15:17:55.214007    4975 start.go:369] acquired machines lock for "no-preload-843000" in 1.741881875s
	I0213 15:17:55.214168    4975 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:17:55.214196    4975 fix.go:54] fixHost starting: 
	I0213 15:17:55.214940    4975 fix.go:102] recreateIfNeeded on no-preload-843000: state=Stopped err=<nil>
	W0213 15:17:55.214973    4975 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:17:55.229103    4975 out.go:177] * Restarting existing qemu2 VM for "no-preload-843000" ...
	I0213 15:17:55.235205    4975 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:5e:b1:c1:d5:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2
	I0213 15:17:55.245489    4975 main.go:141] libmachine: STDOUT: 
	I0213 15:17:55.245591    4975 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:55.245726    4975 fix.go:56] fixHost completed within 31.531208ms
	I0213 15:17:55.245753    4975 start.go:83] releasing machines lock for "no-preload-843000", held for 31.676125ms
	W0213 15:17:55.245802    4975 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:17:55.246073    4975 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:55.246097    4975 start.go:709] Will try again in 5 seconds ...
	I0213 15:17:55.656070    4975 cache.go:162] opening:  /Users/jenkins/minikube-integration/18170-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0213 15:18:00.246281    4975 start.go:365] acquiring machines lock for no-preload-843000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:18:00.246630    4975 start.go:369] acquired machines lock for "no-preload-843000" in 272.375µs
	I0213 15:18:00.246772    4975 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:18:00.246793    4975 fix.go:54] fixHost starting: 
	I0213 15:18:00.247518    4975 fix.go:102] recreateIfNeeded on no-preload-843000: state=Stopped err=<nil>
	W0213 15:18:00.247547    4975 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:18:00.254122    4975 out.go:177] * Restarting existing qemu2 VM for "no-preload-843000" ...
	I0213 15:18:00.259298    4975 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:5e:b1:c1:d5:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/no-preload-843000/disk.qcow2
	I0213 15:18:00.270052    4975 main.go:141] libmachine: STDOUT: 
	I0213 15:18:00.270126    4975 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:18:00.270217    4975 fix.go:56] fixHost completed within 23.424666ms
	I0213 15:18:00.270237    4975 start.go:83] releasing machines lock for "no-preload-843000", held for 23.583ms
	W0213 15:18:00.270421    4975 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-843000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-843000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:18:00.280120    4975 out.go:177] 
	W0213 15:18:00.283107    4975 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:18:00.283153    4975 out.go:239] * 
	* 
	W0213 15:18:00.286318    4975 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:18:00.295018    4975 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-843000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (66.733125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.02s)
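
Unlike FirstStart, this run takes the fixHost path: the profile and disk already exist, state is Stopped, so the driver re-executes the same socket_vmnet_client + qemu-system-aarch64 command line, -pidfile and all, and hits the identical refusal. For illustration only, one way to check whether a qemu -pidfile still points at a live process (signal 0 probes without delivering a signal); the path is a stand-in for the machine directory's qemu.pid:

	package main

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
		"syscall"
	)

	// qemuAlive reports whether the PID recorded by qemu's -pidfile flag still
	// refers to a running process. A missing file means the VM never
	// daemonized, which matches the instant failures in this report.
	func qemuAlive(pidfile string) (bool, error) {
		b, err := os.ReadFile(pidfile)
		if err != nil {
			return false, err
		}
		pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
		if err != nil {
			return false, err
		}
		return syscall.Kill(pid, 0) == nil, nil
	}

	func main() {
		alive, err := qemuAlive("/tmp/qemu.pid") // stand-in path
		fmt.Println(alive, err)
	}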

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-876000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-876000 create -f testdata/busybox.yaml: exit status 1 (29.351959ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-876000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-876000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000: exit status 7 (31.22925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-876000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000: exit status 7 (31.116875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-876000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
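
This failure is a knock-on effect rather than a new bug: the earlier start never completed, so minikube never wrote an embed-certs-876000 entry into the kubeconfig, and every kubectl --context invocation dies before reaching a cluster. A quick way to confirm, using stock commands (sketch only):

    kubectl config get-contexts        # embed-certs-876000 will be missing from the list
    out/minikube-darwin-arm64 -p embed-certs-876000 update-context   # can repair the entry, but only once the VM is actually running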

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-876000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-876000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-876000 describe deploy/metrics-server -n kube-system: exit status 1 (27.091209ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-876000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-876000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000: exit status 7 (30.609042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-876000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
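
For reference, the expected string " fake.domain/registry.k8s.io/echoserver:1.4" is just the --registries override prepended to the --images override from the enable command above. On a healthy cluster the test's check reduces to roughly this sketch:

    out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-876000 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-876000 -n kube-system describe deploy/metrics-server | grep Image:
    # expected: Image: fake.domain/registry.k8s.io/echoserver:1.4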

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-876000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-876000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.168303125s)

                                                
                                                
-- stdout --
	* [embed-certs-876000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-876000 in cluster embed-certs-876000
	* Restarting existing qemu2 VM for "embed-certs-876000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-876000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:17:55.733554    5004 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:17:55.733690    5004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:55.733692    5004 out.go:304] Setting ErrFile to fd 2...
	I0213 15:17:55.733695    5004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:17:55.733821    5004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:17:55.734765    5004 out.go:298] Setting JSON to false
	I0213 15:17:55.750737    5004 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2697,"bootTime":1707863578,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:17:55.750801    5004 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:17:55.755600    5004 out.go:177] * [embed-certs-876000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:17:55.762615    5004 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:17:55.762700    5004 notify.go:220] Checking for updates...
	I0213 15:17:55.766605    5004 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:17:55.769644    5004 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:17:55.776607    5004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:17:55.780722    5004 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:17:55.783654    5004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:17:55.786824    5004 config.go:182] Loaded profile config "embed-certs-876000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:17:55.787090    5004 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:17:55.791601    5004 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 15:17:55.798526    5004 start.go:298] selected driver: qemu2
	I0213 15:17:55.798531    5004 start.go:902] validating driver "qemu2" against &{Name:embed-certs-876000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-876000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:17:55.798604    5004 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:17:55.800854    5004 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:17:55.800904    5004 cni.go:84] Creating CNI manager for ""
	I0213 15:17:55.800912    5004 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:17:55.800916    5004 start_flags.go:321] config:
	{Name:embed-certs-876000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-876000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:17:55.805295    5004 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:17:55.813503    5004 out.go:177] * Starting control plane node embed-certs-876000 in cluster embed-certs-876000
	I0213 15:17:55.817617    5004 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:17:55.817632    5004 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:17:55.817644    5004 cache.go:56] Caching tarball of preloaded images
	I0213 15:17:55.817710    5004 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:17:55.817716    5004 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:17:55.817792    5004 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/embed-certs-876000/config.json ...
	I0213 15:17:55.818327    5004 start.go:365] acquiring machines lock for embed-certs-876000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:17:55.818355    5004 start.go:369] acquired machines lock for "embed-certs-876000" in 21.875µs
	I0213 15:17:55.818368    5004 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:17:55.818374    5004 fix.go:54] fixHost starting: 
	I0213 15:17:55.818496    5004 fix.go:102] recreateIfNeeded on embed-certs-876000: state=Stopped err=<nil>
	W0213 15:17:55.818507    5004 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:17:55.822445    5004 out.go:177] * Restarting existing qemu2 VM for "embed-certs-876000" ...
	I0213 15:17:55.830637    5004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:bb:99:49:38:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2
	I0213 15:17:55.832834    5004 main.go:141] libmachine: STDOUT: 
	I0213 15:17:55.832857    5004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:17:55.832899    5004 fix.go:56] fixHost completed within 14.525125ms
	I0213 15:17:55.832905    5004 start.go:83] releasing machines lock for "embed-certs-876000", held for 14.546291ms
	W0213 15:17:55.832912    5004 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:17:55.832957    5004 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:17:55.832962    5004 start.go:709] Will try again in 5 seconds ...
	I0213 15:18:00.834066    5004 start.go:365] acquiring machines lock for embed-certs-876000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:18:00.834147    5004 start.go:369] acquired machines lock for "embed-certs-876000" in 59.167µs
	I0213 15:18:00.834165    5004 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:18:00.834170    5004 fix.go:54] fixHost starting: 
	I0213 15:18:00.834306    5004 fix.go:102] recreateIfNeeded on embed-certs-876000: state=Stopped err=<nil>
	W0213 15:18:00.834311    5004 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:18:00.837873    5004 out.go:177] * Restarting existing qemu2 VM for "embed-certs-876000" ...
	I0213 15:18:00.840921    5004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:bb:99:49:38:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/embed-certs-876000/disk.qcow2
	I0213 15:18:00.843012    5004 main.go:141] libmachine: STDOUT: 
	I0213 15:18:00.843030    5004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:18:00.843050    5004 fix.go:56] fixHost completed within 8.88ms
	I0213 15:18:00.843054    5004 start.go:83] releasing machines lock for "embed-certs-876000", held for 8.902334ms
	W0213 15:18:00.843097    5004 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-876000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-876000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:18:00.849983    5004 out.go:177] 
	W0213 15:18:00.853911    5004 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:18:00.853916    5004 out.go:239] * 
	* 
	W0213 15:18:00.854369    5004 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:18:00.864915    5004 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-876000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000: exit status 7 (33.307583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-876000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.20s)
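
Note the retry shape in the stderr: fixHost fails within ~15ms, minikube backs off 5 seconds, retries once, then exits with GUEST_PROVISION. Because socket_vmnet_client takes the socket path followed by the command to exec (exactly the calling convention shown in the libmachine lines), the socket can be probed in isolation from QEMU; a hedged sketch:

    # Any trivial command works: socket_vmnet_client must connect to the
    # socket before exec'ing its argument, so this isolates the refusal
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
      && echo "socket_vmnet reachable" \
      || echo "connection refused, matching the failures above"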

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-843000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (32.98ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
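
The "failed waiting for 'addon dashboard' pod" message comes from the harness polling for the dashboard pod with a client config built from the missing context. On a working profile the equivalent check is a plain kubectl wait; the k8s-app label below is an assumption based on minikube's stock dashboard addon manifests:

    kubectl --context no-preload-843000 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m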

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-843000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-843000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-843000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.634625ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-843000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-843000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (30.85275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-843000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (30.173208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
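
The empty +got side of the diff means image list returned nothing at all, which follows directly from the VM never booting. When the profile is up, the comparison can be approximated by hand; the repoTags field name is an assumption about the JSON schema of recent minikube releases:

    out/minikube-darwin-arm64 -p no-preload-843000 image list --format=json \
      | jq -r '.[].repoTags[]' | sort
    # compare against the -want list above, e.g. registry.k8s.io/kube-apiserver:v1.29.0-rc.2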

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-843000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-843000 --alsologtostderr -v=1: exit status 89 (43.2675ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-843000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:18:00.571803    5026 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:18:00.571952    5026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:00.571955    5026 out.go:304] Setting ErrFile to fd 2...
	I0213 15:18:00.571957    5026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:00.572075    5026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:18:00.572293    5026 out.go:298] Setting JSON to false
	I0213 15:18:00.572302    5026 mustload.go:65] Loading cluster: no-preload-843000
	I0213 15:18:00.572499    5026 config.go:182] Loaded profile config "no-preload-843000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 15:18:00.576882    5026 out.go:177] * The control plane node must be running for this command
	I0213 15:18:00.581063    5026 out.go:177]   To start a cluster, run: "minikube start -p no-preload-843000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-843000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (30.779333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (30.9995ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
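
The pause fails with exit status 89 because, as its own stdout says, the control plane node is not running. A small guard that mirrors the harness's post-mortem status check avoids attempting the pause at all; sketch:

    host=$(out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000)
    if [ "$host" = "Running" ]; then
      out/minikube-darwin-arm64 pause -p no-preload-843000
    else
      echo "host is $host; nothing to pause"   # here: Stopped
    fi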

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-876000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000: exit status 7 (33.009709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-876000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-876000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-876000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-876000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.834ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-876000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-876000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000: exit status 7 (32.79475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-876000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-876000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000: exit status 7 (34.350667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-876000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-876000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-876000 --alsologtostderr -v=1: exit status 89 (47.825834ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-876000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:18:01.107925    5063 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:18:01.108096    5063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:01.108099    5063 out.go:304] Setting ErrFile to fd 2...
	I0213 15:18:01.108102    5063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:01.108272    5063 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:18:01.108517    5063 out.go:298] Setting JSON to false
	I0213 15:18:01.108527    5063 mustload.go:65] Loading cluster: embed-certs-876000
	I0213 15:18:01.108745    5063 config.go:182] Loaded profile config "embed-certs-876000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:18:01.112713    5063 out.go:177] * The control plane node must be running for this command
	I0213 15:18:01.119889    5063 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-876000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-876000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000: exit status 7 (32.150416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-876000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000: exit status 7 (32.300541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-876000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-066000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-066000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.73314375s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-066000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-066000 in cluster default-k8s-diff-port-066000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-066000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 15:18:01.337583    5084 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:18:01.337715    5084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:01.337718    5084 out.go:304] Setting ErrFile to fd 2...
	I0213 15:18:01.337720    5084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:01.337851    5084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:18:01.339189    5084 out.go:298] Setting JSON to false
	I0213 15:18:01.358749    5084 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2703,"bootTime":1707863578,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:18:01.358817    5084 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:18:01.362839    5084 out.go:177] * [default-k8s-diff-port-066000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:18:01.371964    5084 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:18:01.375894    5084 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:18:01.372009    5084 notify.go:220] Checking for updates...
	I0213 15:18:01.381918    5084 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:18:01.384798    5084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:18:01.391884    5084 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:18:01.399835    5084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:18:01.404235    5084 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:18:01.404287    5084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:18:01.415847    5084 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:18:01.423712    5084 start.go:298] selected driver: qemu2
	I0213 15:18:01.423718    5084 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:18:01.423724    5084 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:18:01.426102    5084 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 15:18:01.428835    5084 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:18:01.431938    5084 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:18:01.431981    5084 cni.go:84] Creating CNI manager for ""
	I0213 15:18:01.431990    5084 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:18:01.431995    5084 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:18:01.432000    5084 start_flags.go:321] config:
	{Name:default-k8s-diff-port-066000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-066000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:18:01.436849    5084 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:18:01.443811    5084 out.go:177] * Starting control plane node default-k8s-diff-port-066000 in cluster default-k8s-diff-port-066000
	I0213 15:18:01.447890    5084 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:18:01.447920    5084 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:18:01.447929    5084 cache.go:56] Caching tarball of preloaded images
	I0213 15:18:01.448010    5084 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:18:01.448016    5084 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:18:01.448090    5084 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/default-k8s-diff-port-066000/config.json ...
	I0213 15:18:01.448101    5084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/default-k8s-diff-port-066000/config.json: {Name:mk2173f737e462062093b2af061a49719368ea68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:18:01.448401    5084 start.go:365] acquiring machines lock for default-k8s-diff-port-066000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:18:01.448440    5084 start.go:369] acquired machines lock for "default-k8s-diff-port-066000" in 30.291µs
	I0213 15:18:01.448452    5084 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-066000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:18:01.448495    5084 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:18:01.451896    5084 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:18:01.467751    5084 start.go:159] libmachine.API.Create for "default-k8s-diff-port-066000" (driver="qemu2")
	I0213 15:18:01.467788    5084 client.go:168] LocalClient.Create starting
	I0213 15:18:01.467858    5084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:18:01.467892    5084 main.go:141] libmachine: Decoding PEM data...
	I0213 15:18:01.467903    5084 main.go:141] libmachine: Parsing certificate...
	I0213 15:18:01.467944    5084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:18:01.467966    5084 main.go:141] libmachine: Decoding PEM data...
	I0213 15:18:01.467973    5084 main.go:141] libmachine: Parsing certificate...
	I0213 15:18:01.468322    5084 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:18:01.597226    5084 main.go:141] libmachine: Creating SSH key...
	I0213 15:18:01.657379    5084 main.go:141] libmachine: Creating Disk image...
	I0213 15:18:01.657388    5084 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:18:01.657564    5084 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2
	I0213 15:18:01.671417    5084 main.go:141] libmachine: STDOUT: 
	I0213 15:18:01.671442    5084 main.go:141] libmachine: STDERR: 
	I0213 15:18:01.671523    5084 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2 +20000M
	I0213 15:18:01.683605    5084 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:18:01.683630    5084 main.go:141] libmachine: STDERR: 
	I0213 15:18:01.683645    5084 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2
	I0213 15:18:01.683650    5084 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:18:01.683682    5084 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:8a:48:ff:b9:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2
	I0213 15:18:01.685409    5084 main.go:141] libmachine: STDOUT: 
	I0213 15:18:01.685428    5084 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:18:01.685449    5084 client.go:171] LocalClient.Create took 217.661334ms
	I0213 15:18:03.687617    5084 start.go:128] duration metric: createHost completed in 2.239147625s
	I0213 15:18:03.687685    5084 start.go:83] releasing machines lock for "default-k8s-diff-port-066000", held for 2.239282083s
	W0213 15:18:03.687732    5084 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:18:03.704677    5084 out.go:177] * Deleting "default-k8s-diff-port-066000" in qemu2 ...
	W0213 15:18:03.723891    5084 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:18:03.723927    5084 start.go:709] Will try again in 5 seconds ...
	I0213 15:18:08.725980    5084 start.go:365] acquiring machines lock for default-k8s-diff-port-066000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:18:08.726388    5084 start.go:369] acquired machines lock for "default-k8s-diff-port-066000" in 327.75µs
	I0213 15:18:08.726543    5084 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-066000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:18:08.726752    5084 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:18:08.731637    5084 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:18:08.783202    5084 start.go:159] libmachine.API.Create for "default-k8s-diff-port-066000" (driver="qemu2")
	I0213 15:18:08.783254    5084 client.go:168] LocalClient.Create starting
	I0213 15:18:08.783368    5084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:18:08.783422    5084 main.go:141] libmachine: Decoding PEM data...
	I0213 15:18:08.783441    5084 main.go:141] libmachine: Parsing certificate...
	I0213 15:18:08.783499    5084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:18:08.783542    5084 main.go:141] libmachine: Decoding PEM data...
	I0213 15:18:08.783554    5084 main.go:141] libmachine: Parsing certificate...
	I0213 15:18:08.784058    5084 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:18:08.919767    5084 main.go:141] libmachine: Creating SSH key...
	I0213 15:18:08.959252    5084 main.go:141] libmachine: Creating Disk image...
	I0213 15:18:08.959257    5084 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:18:08.959458    5084 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2
	I0213 15:18:08.972247    5084 main.go:141] libmachine: STDOUT: 
	I0213 15:18:08.972275    5084 main.go:141] libmachine: STDERR: 
	I0213 15:18:08.972351    5084 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2 +20000M
	I0213 15:18:08.983168    5084 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:18:08.983194    5084 main.go:141] libmachine: STDERR: 
	I0213 15:18:08.983209    5084 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2
	I0213 15:18:08.983215    5084 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:18:08.983249    5084 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:8b:60:3b:a4:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2
	I0213 15:18:08.984964    5084 main.go:141] libmachine: STDOUT: 
	I0213 15:18:08.984992    5084 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:18:08.985006    5084 client.go:171] LocalClient.Create took 201.749792ms
	I0213 15:18:10.987177    5084 start.go:128] duration metric: createHost completed in 2.260426041s
	I0213 15:18:10.987239    5084 start.go:83] releasing machines lock for "default-k8s-diff-port-066000", held for 2.260873667s
	W0213 15:18:10.987528    5084 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-066000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:18:10.999154    5084 out.go:177] 
	W0213 15:18:11.014233    5084 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:18:11.014301    5084 out.go:239] * 
	* 
	W0213 15:18:11.017037    5084 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:18:11.028092    5084 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-066000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000: exit status 7 (54.83975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.79s)
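Every failure in this group traces to the same root cause visible in the stderr above: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, whose first step is to connect to the unix socket at /var/run/socket_vmnet, and that connect is refused because no socket_vmnet daemon is listening on the CI host. A minimal standalone Go probe (hypothetical, not part of the test suite) reproduces just that dial:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// socket_vmnet_client begins by dialing this unix socket; the
	// "Connection refused" in the logs means nothing is listening there.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial failed: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If socket_vmnet was installed through Homebrew, as the /opt/socket_vmnet paths suggest, restarting its service (e.g. sudo brew services start socket_vmnet, per the minikube qemu driver documentation) is the usual remedy; this is an assumption about the host setup, not something the log confirms.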

TestStartStop/group/newest-cni/serial/FirstStart (11.93s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-330000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-330000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (11.865694s)

-- stdout --
	* [newest-cni-330000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-330000 in cluster newest-cni-330000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-330000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:18:01.653342    5103 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:18:01.653461    5103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:01.653464    5103 out.go:304] Setting ErrFile to fd 2...
	I0213 15:18:01.653467    5103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:01.653584    5103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:18:01.654745    5103 out.go:298] Setting JSON to false
	I0213 15:18:01.671857    5103 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2703,"bootTime":1707863578,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:18:01.671945    5103 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:18:01.675887    5103 out.go:177] * [newest-cni-330000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:18:01.690860    5103 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:18:01.687000    5103 notify.go:220] Checking for updates...
	I0213 15:18:01.698808    5103 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:18:01.701827    5103 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:18:01.704909    5103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:18:01.707851    5103 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:18:01.709268    5103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:18:01.713158    5103 config.go:182] Loaded profile config "default-k8s-diff-port-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:18:01.713223    5103 config.go:182] Loaded profile config "multinode-078000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:18:01.713280    5103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:18:01.717895    5103 out.go:177] * Using the qemu2 driver based on user configuration
	I0213 15:18:01.723822    5103 start.go:298] selected driver: qemu2
	I0213 15:18:01.723827    5103 start.go:902] validating driver "qemu2" against <nil>
	I0213 15:18:01.723831    5103 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:18:01.725879    5103 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0213 15:18:01.725898    5103 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0213 15:18:01.733688    5103 out.go:177] * Automatically selected the socket_vmnet network
	I0213 15:18:01.736999    5103 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0213 15:18:01.737031    5103 cni.go:84] Creating CNI manager for ""
	I0213 15:18:01.737037    5103 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:18:01.737041    5103 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 15:18:01.737047    5103 start_flags.go:321] config:
	{Name:newest-cni-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:18:01.741575    5103 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:18:01.748829    5103 out.go:177] * Starting control plane node newest-cni-330000 in cluster newest-cni-330000
	I0213 15:18:01.752786    5103 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 15:18:01.752805    5103 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0213 15:18:01.752814    5103 cache.go:56] Caching tarball of preloaded images
	I0213 15:18:01.752899    5103 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:18:01.752910    5103 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0213 15:18:01.752977    5103 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/newest-cni-330000/config.json ...
	I0213 15:18:01.752989    5103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/newest-cni-330000/config.json: {Name:mke2884aaccdc157771a917aef57ef8e6dc5ee0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 15:18:01.753229    5103 start.go:365] acquiring machines lock for newest-cni-330000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:18:03.687860    5103 start.go:369] acquired machines lock for "newest-cni-330000" in 1.9346355s
	I0213 15:18:03.687968    5103 start.go:93] Provisioning new machine with config: &{Name:newest-cni-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:18:03.688239    5103 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:18:03.696751    5103 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:18:03.747665    5103 start.go:159] libmachine.API.Create for "newest-cni-330000" (driver="qemu2")
	I0213 15:18:03.747721    5103 client.go:168] LocalClient.Create starting
	I0213 15:18:03.747878    5103 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:18:03.747937    5103 main.go:141] libmachine: Decoding PEM data...
	I0213 15:18:03.747956    5103 main.go:141] libmachine: Parsing certificate...
	I0213 15:18:03.748017    5103 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:18:03.748059    5103 main.go:141] libmachine: Decoding PEM data...
	I0213 15:18:03.748074    5103 main.go:141] libmachine: Parsing certificate...
	I0213 15:18:03.748769    5103 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:18:03.882138    5103 main.go:141] libmachine: Creating SSH key...
	I0213 15:18:03.928810    5103 main.go:141] libmachine: Creating Disk image...
	I0213 15:18:03.928816    5103 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:18:03.929015    5103 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2
	I0213 15:18:03.941338    5103 main.go:141] libmachine: STDOUT: 
	I0213 15:18:03.941370    5103 main.go:141] libmachine: STDERR: 
	I0213 15:18:03.941440    5103 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2 +20000M
	I0213 15:18:03.952296    5103 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:18:03.952321    5103 main.go:141] libmachine: STDERR: 
	I0213 15:18:03.952334    5103 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2
	I0213 15:18:03.952338    5103 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:18:03.952371    5103 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:42:90:e3:06:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2
	I0213 15:18:03.954191    5103 main.go:141] libmachine: STDOUT: 
	I0213 15:18:03.954208    5103 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:18:03.954230    5103 client.go:171] LocalClient.Create took 206.507584ms
	I0213 15:18:05.956365    5103 start.go:128] duration metric: createHost completed in 2.268142083s
	I0213 15:18:05.956430    5103 start.go:83] releasing machines lock for "newest-cni-330000", held for 2.268573875s
	W0213 15:18:05.956517    5103 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:18:05.968653    5103 out.go:177] * Deleting "newest-cni-330000" in qemu2 ...
	W0213 15:18:05.994020    5103 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:18:05.994051    5103 start.go:709] Will try again in 5 seconds ...
	I0213 15:18:10.996221    5103 start.go:365] acquiring machines lock for newest-cni-330000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:18:10.996572    5103 start.go:369] acquired machines lock for "newest-cni-330000" in 273.084µs
	I0213 15:18:10.996726    5103 start.go:93] Provisioning new machine with config: &{Name:newest-cni-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0213 15:18:10.997011    5103 start.go:125] createHost starting for "" (driver="qemu2")
	I0213 15:18:11.010071    5103 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 15:18:11.059521    5103 start.go:159] libmachine.API.Create for "newest-cni-330000" (driver="qemu2")
	I0213 15:18:11.059567    5103 client.go:168] LocalClient.Create starting
	I0213 15:18:11.059702    5103 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/ca.pem
	I0213 15:18:11.059750    5103 main.go:141] libmachine: Decoding PEM data...
	I0213 15:18:11.059765    5103 main.go:141] libmachine: Parsing certificate...
	I0213 15:18:11.059829    5103 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18170-979/.minikube/certs/cert.pem
	I0213 15:18:11.059856    5103 main.go:141] libmachine: Decoding PEM data...
	I0213 15:18:11.059867    5103 main.go:141] libmachine: Parsing certificate...
	I0213 15:18:11.060342    5103 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18170-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso...
	I0213 15:18:11.198457    5103 main.go:141] libmachine: Creating SSH key...
	I0213 15:18:11.403995    5103 main.go:141] libmachine: Creating Disk image...
	I0213 15:18:11.404006    5103 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0213 15:18:11.404276    5103 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2
	I0213 15:18:11.418830    5103 main.go:141] libmachine: STDOUT: 
	I0213 15:18:11.418867    5103 main.go:141] libmachine: STDERR: 
	I0213 15:18:11.418988    5103 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2 +20000M
	I0213 15:18:11.431698    5103 main.go:141] libmachine: STDOUT: Image resized.
	
	I0213 15:18:11.431727    5103 main.go:141] libmachine: STDERR: 
	I0213 15:18:11.431747    5103 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2
	I0213 15:18:11.431753    5103 main.go:141] libmachine: Starting QEMU VM...
	I0213 15:18:11.431804    5103 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7c:d1:eb:cd:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2
	I0213 15:18:11.433776    5103 main.go:141] libmachine: STDOUT: 
	I0213 15:18:11.433819    5103 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:18:11.433834    5103 client.go:171] LocalClient.Create took 374.232792ms
	I0213 15:18:13.436119    5103 start.go:128] duration metric: createHost completed in 2.43906525s
	I0213 15:18:13.436230    5103 start.go:83] releasing machines lock for "newest-cni-330000", held for 2.439681292s
	W0213 15:18:13.436568    5103 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:18:13.445239    5103 out.go:177] 
	W0213 15:18:13.457215    5103 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:18:13.457263    5103 out.go:239] * 
	* 
	W0213 15:18:13.460020    5103 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:18:13.471247    5103 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-330000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000: exit status 7 (62.451125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.93s)
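The stderr above also shows the driver's recovery behavior: when the first createHost fails, minikube deletes the half-created VM, logs "Will try again in 5 seconds ...", and retries exactly once before exiting with GUEST_PROVISION. A simplified sketch of that observable retry-once pattern (createHost here is a hypothetical stand-in, not minikube's actual function):

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the libmachine create path that fails in the
// logs above; it always returns the same connection error.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		// First failure: warn, then wait a fixed 5s and retry once.
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := createHost(); err != nil {
			// Second failure: give up, as the GUEST_PROVISION exit above does.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}

This pattern also accounts for the roughly 10-12s durations of these FirstStart failures: two ~2.2s create attempts plus the fixed 5s back-off.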

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-066000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-066000 create -f testdata/busybox.yaml: exit status 1 (31.438209ms)

** stderr ** 
	error: context "default-k8s-diff-port-066000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-066000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000: exit status 7 (35.821625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-066000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000: exit status 7 (35.933666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
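This DeployApp failure, and the EnableAddonWhileActive failure that follows, are cascades rather than independent bugs: FirstStart never created the cluster, so kubectl has no "default-k8s-diff-port-066000" context to target. A small Go check (a hypothetical helper, not from helpers_test.go) that distinguishes a missing context from other kubectl errors:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `kubectl config get-contexts -o name` prints one context name per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Println("kubectl not usable:", err)
		return
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == "default-k8s-diff-port-066000" {
			fmt.Println("context exists; the failure is something else")
			return
		}
	}
	fmt.Println("context missing: the cluster was never created, so this failure is a cascade")
}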

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-066000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-066000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-066000 describe deploy/metrics-server -n kube-system: exit status 1 (27.912542ms)

** stderr ** 
	error: context "default-k8s-diff-port-066000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-066000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000: exit status 7 (34.91625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-066000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-066000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (7.030046167s)

-- stdout --
	* [default-k8s-diff-port-066000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-066000 in cluster default-k8s-diff-port-066000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-066000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-066000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:18:11.530217    5145 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:18:11.530337    5145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:11.530341    5145 out.go:304] Setting ErrFile to fd 2...
	I0213 15:18:11.530343    5145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:11.530481    5145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:18:11.531441    5145 out.go:298] Setting JSON to false
	I0213 15:18:11.547388    5145 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2713,"bootTime":1707863578,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:18:11.547487    5145 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:18:11.551241    5145 out.go:177] * [default-k8s-diff-port-066000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:18:11.558071    5145 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:18:11.562131    5145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:18:11.558084    5145 notify.go:220] Checking for updates...
	I0213 15:18:11.566034    5145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:18:11.569125    5145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:18:11.572145    5145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:18:11.575122    5145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:18:11.578334    5145 config.go:182] Loaded profile config "default-k8s-diff-port-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:18:11.578587    5145 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:18:11.583072    5145 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 15:18:11.590106    5145 start.go:298] selected driver: qemu2
	I0213 15:18:11.590111    5145 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-066000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:18:11.590175    5145 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:18:11.592529    5145 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 15:18:11.592583    5145 cni.go:84] Creating CNI manager for ""
	I0213 15:18:11.592592    5145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:18:11.592599    5145 start_flags.go:321] config:
	{Name:default-k8s-diff-port-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-066000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:18:11.596942    5145 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:18:11.605114    5145 out.go:177] * Starting control plane node default-k8s-diff-port-066000 in cluster default-k8s-diff-port-066000
	I0213 15:18:11.606480    5145 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 15:18:11.606494    5145 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 15:18:11.606504    5145 cache.go:56] Caching tarball of preloaded images
	I0213 15:18:11.606580    5145 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:18:11.606587    5145 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 15:18:11.606662    5145 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/default-k8s-diff-port-066000/config.json ...
	I0213 15:18:11.607028    5145 start.go:365] acquiring machines lock for default-k8s-diff-port-066000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:18:13.436377    5145 start.go:369] acquired machines lock for "default-k8s-diff-port-066000" in 1.829333084s
	I0213 15:18:13.436576    5145 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:18:13.436610    5145 fix.go:54] fixHost starting: 
	I0213 15:18:13.437281    5145 fix.go:102] recreateIfNeeded on default-k8s-diff-port-066000: state=Stopped err=<nil>
	W0213 15:18:13.437327    5145 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:18:13.453264    5145 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-066000" ...
	I0213 15:18:13.460455    5145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:8b:60:3b:a4:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2
	I0213 15:18:13.470186    5145 main.go:141] libmachine: STDOUT: 
	I0213 15:18:13.470265    5145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:18:13.470394    5145 fix.go:56] fixHost completed within 33.789459ms
	I0213 15:18:13.470415    5145 start.go:83] releasing machines lock for "default-k8s-diff-port-066000", held for 33.978333ms
	W0213 15:18:13.470449    5145 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:18:13.470637    5145 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:18:13.470653    5145 start.go:709] Will try again in 5 seconds ...
	I0213 15:18:18.472800    5145 start.go:365] acquiring machines lock for default-k8s-diff-port-066000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:18:18.473323    5145 start.go:369] acquired machines lock for "default-k8s-diff-port-066000" in 394.625µs
	I0213 15:18:18.473462    5145 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:18:18.473481    5145 fix.go:54] fixHost starting: 
	I0213 15:18:18.474148    5145 fix.go:102] recreateIfNeeded on default-k8s-diff-port-066000: state=Stopped err=<nil>
	W0213 15:18:18.474179    5145 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:18:18.483547    5145 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-066000" ...
	I0213 15:18:18.485460    5145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:8b:60:3b:a4:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/default-k8s-diff-port-066000/disk.qcow2
	I0213 15:18:18.495042    5145 main.go:141] libmachine: STDOUT: 
	I0213 15:18:18.495148    5145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:18:18.495223    5145 fix.go:56] fixHost completed within 21.738459ms
	I0213 15:18:18.495244    5145 start.go:83] releasing machines lock for "default-k8s-diff-port-066000", held for 21.897625ms
	W0213 15:18:18.495485    5145 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-066000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-066000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:18:18.502545    5145 out.go:177] 
	W0213 15:18:18.505561    5145 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:18:18.505601    5145 out.go:239] * 
	* 
	W0213 15:18:18.508119    5145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:18:18.516485    5145 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-066000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000: exit status 7 (68.136834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.10s)
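
Note: this failure aborts at one point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu2 never receives its network file descriptor, and the single retry five seconds later hits the same wall. A minimal pre-flight sketch for the CI host, assuming socket_vmnet is installed under /opt/socket_vmnet (only the client path appears in this log; the daemon path and gateway address below are illustrative, not taken from the report):

    #!/bin/bash
    # Verify the socket_vmnet daemon is up before starting qemu2 profiles.
    SOCK=/var/run/socket_vmnet
    pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"
    [ -S "$SOCK" ] || echo "no unix socket at $SOCK"
    # If it is down, run it in the foreground to watch it accept connections
    # (root is needed to create the vmnet interface; the gateway is an example):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 "$SOCK"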

TestStartStop/group/newest-cni/serial/SecondStart (5.2s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-330000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-330000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.168265167s)

-- stdout --
	* [newest-cni-330000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-330000 in cluster newest-cni-330000
	* Restarting existing qemu2 VM for "newest-cni-330000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-330000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0213 15:18:13.814166    5162 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:18:13.814277    5162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:13.814281    5162 out.go:304] Setting ErrFile to fd 2...
	I0213 15:18:13.814287    5162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:13.814413    5162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:18:13.815378    5162 out.go:298] Setting JSON to false
	I0213 15:18:13.831560    5162 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2715,"bootTime":1707863578,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 15:18:13.831629    5162 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 15:18:13.836308    5162 out.go:177] * [newest-cni-330000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 15:18:13.842279    5162 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 15:18:13.847200    5162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 15:18:13.842358    5162 notify.go:220] Checking for updates...
	I0213 15:18:13.853176    5162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 15:18:13.856260    5162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 15:18:13.857826    5162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 15:18:13.861193    5162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 15:18:13.864572    5162 config.go:182] Loaded profile config "newest-cni-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 15:18:13.864830    5162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 15:18:13.869047    5162 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 15:18:13.876238    5162 start.go:298] selected driver: qemu2
	I0213 15:18:13.876245    5162 start.go:902] validating driver "qemu2" against &{Name:newest-cni-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:18:13.876306    5162 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 15:18:13.878628    5162 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0213 15:18:13.878672    5162 cni.go:84] Creating CNI manager for ""
	I0213 15:18:13.878679    5162 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 15:18:13.878683    5162 start_flags.go:321] config:
	{Name:newest-cni-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 15:18:13.883068    5162 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 15:18:13.890258    5162 out.go:177] * Starting control plane node newest-cni-330000 in cluster newest-cni-330000
	I0213 15:18:13.894315    5162 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 15:18:13.894330    5162 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0213 15:18:13.894339    5162 cache.go:56] Caching tarball of preloaded images
	I0213 15:18:13.894415    5162 preload.go:174] Found /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0213 15:18:13.894421    5162 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0213 15:18:13.894509    5162 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/newest-cni-330000/config.json ...
	I0213 15:18:13.895005    5162 start.go:365] acquiring machines lock for newest-cni-330000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:18:13.895033    5162 start.go:369] acquired machines lock for "newest-cni-330000" in 21.417µs
	I0213 15:18:13.895041    5162 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:18:13.895047    5162 fix.go:54] fixHost starting: 
	I0213 15:18:13.895179    5162 fix.go:102] recreateIfNeeded on newest-cni-330000: state=Stopped err=<nil>
	W0213 15:18:13.895189    5162 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:18:13.898234    5162 out.go:177] * Restarting existing qemu2 VM for "newest-cni-330000" ...
	I0213 15:18:13.906193    5162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7c:d1:eb:cd:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2
	I0213 15:18:13.908207    5162 main.go:141] libmachine: STDOUT: 
	I0213 15:18:13.908239    5162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:18:13.908268    5162 fix.go:56] fixHost completed within 13.221708ms
	I0213 15:18:13.908280    5162 start.go:83] releasing machines lock for "newest-cni-330000", held for 13.237333ms
	W0213 15:18:13.908286    5162 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:18:13.908322    5162 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:18:13.908327    5162 start.go:709] Will try again in 5 seconds ...
	I0213 15:18:18.909888    5162 start.go:365] acquiring machines lock for newest-cni-330000: {Name:mkcf10d0c0c49a339bbe166f5c624542bb55a51b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 15:18:18.909950    5162 start.go:369] acquired machines lock for "newest-cni-330000" in 41.833µs
	I0213 15:18:18.909970    5162 start.go:96] Skipping create...Using existing machine configuration
	I0213 15:18:18.909974    5162 fix.go:54] fixHost starting: 
	I0213 15:18:18.910095    5162 fix.go:102] recreateIfNeeded on newest-cni-330000: state=Stopped err=<nil>
	W0213 15:18:18.910100    5162 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 15:18:18.918302    5162 out.go:177] * Restarting existing qemu2 VM for "newest-cni-330000" ...
	I0213 15:18:18.921345    5162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7c:d1:eb:cd:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18170-979/.minikube/machines/newest-cni-330000/disk.qcow2
	I0213 15:18:18.923480    5162 main.go:141] libmachine: STDOUT: 
	I0213 15:18:18.923499    5162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0213 15:18:18.923521    5162 fix.go:56] fixHost completed within 13.546083ms
	I0213 15:18:18.923526    5162 start.go:83] releasing machines lock for "newest-cni-330000", held for 13.571791ms
	W0213 15:18:18.923576    5162 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-330000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-330000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0213 15:18:18.931263    5162 out.go:177] 
	W0213 15:18:18.934354    5162 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0213 15:18:18.934365    5162 out.go:239] * 
	* 
	W0213 15:18:18.934839    5162 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 15:18:18.949319    5162 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-330000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000: exit status 7 (32.3905ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.20s)
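
Note: the newest-cni restart dies on the same refused socket_vmnet connection, and the error text printed by minikube already names a fallback. A sketch of that recovery using this test's profile name follows; it only helps once the socket_vmnet daemon itself is reachable, otherwise a fresh start fails identically:

    # Recreate the profile, as the "may fix it" message suggests:
    out/minikube-darwin-arm64 delete -p newest-cni-330000
    out/minikube-darwin-arm64 start -p newest-cni-330000 --driver=qemu2 \
      --kubernetes-version=v1.29.0-rc.2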

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-066000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000: exit status 7 (33.109625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-066000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-066000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-066000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.998667ms)

** stderr ** 
	error: context "default-k8s-diff-port-066000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-066000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000: exit status 7 (31.41375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
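
Note: the "context ... does not exist" failures here and in UserAppExistsAfterStop are downstream of the failed SecondStart: the profile never reached the point where minikube writes a context into the kubeconfig, so every kubectl call fails before touching a cluster. A quick confirmation sketch (the kubectl subcommand and flag are standard; the KUBECONFIG path is the one this run used):

    # The profile's context should be absent from the run's kubeconfig.
    KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig \
      kubectl config get-contexts -o name | grep default-k8s-diff-port-066000 \
      || echo "context missing, matching the test error"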

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-066000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000: exit status 7 (30.14025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
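
Note: the image diff lists every expected v1.28.4 image as missing, which is what an unbooted VM looks like rather than a stale cache. To rerun the check by hand, the same image list command can be inspected directly; the jq filter below assumes the JSON output is an array of objects carrying a repoTags field, which this report does not itself show:

    # Dump the tags minikube reports for the profile; empty if the VM never started.
    out/minikube-darwin-arm64 -p default-k8s-diff-port-066000 image list --format=json \
      | jq -r '.[].repoTags[]?'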

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-066000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-066000 --alsologtostderr -v=1: exit status 89 (41.339292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-066000"

-- /stdout --
** stderr ** 
	I0213 15:18:18.797885    5181 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:18:18.798049    5181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:18.798052    5181 out.go:304] Setting ErrFile to fd 2...
	I0213 15:18:18.798055    5181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:18.798178    5181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:18:18.798379    5181 out.go:298] Setting JSON to false
	I0213 15:18:18.798388    5181 mustload.go:65] Loading cluster: default-k8s-diff-port-066000
	I0213 15:18:18.798563    5181 config.go:182] Loaded profile config "default-k8s-diff-port-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 15:18:18.803414    5181 out.go:177] * The control plane node must be running for this command
	I0213 15:18:18.806565    5181 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-066000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-066000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000: exit status 7 (30.565625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-066000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000: exit status 7 (30.727ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-066000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
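
Note: pause exits 89 because the control-plane node is stopped, as the post-mortem status (exit 7, "Stopped") confirms. A guard sketch using the same status template the helpers run, treating the exit-code and state strings as observed in this log rather than as documented contract:

    # Pause only when the host reports Running; otherwise record the state and skip.
    HOST=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p default-k8s-diff-port-066000)
    if [ "$HOST" = "Running" ]; then
      out/minikube-darwin-arm64 pause -p default-k8s-diff-port-066000
    else
      echo "host state is ${HOST:-unknown}; skipping pause"
    fi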

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-330000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000: exit status 7 (32.989583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-330000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-330000 --alsologtostderr -v=1: exit status 89 (43.27925ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-330000"

-- /stdout --
** stderr ** 
	I0213 15:18:19.097494    5202 out.go:291] Setting OutFile to fd 1 ...
	I0213 15:18:19.097637    5202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:19.097641    5202 out.go:304] Setting ErrFile to fd 2...
	I0213 15:18:19.097644    5202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 15:18:19.097795    5202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 15:18:19.098020    5202 out.go:298] Setting JSON to false
	I0213 15:18:19.098030    5202 mustload.go:65] Loading cluster: newest-cni-330000
	I0213 15:18:19.098234    5202 config.go:182] Loaded profile config "newest-cni-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0213 15:18:19.102327    5202 out.go:177] * The control plane node must be running for this command
	I0213 15:18:19.106441    5202 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-330000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-330000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000: exit status 7 (31.559667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-330000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000: exit status 7 (32.517209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (162/271)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
9 TestDownloadOnly/v1.16.0/DeleteAll 0.24
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.28.4/json-events 26.94
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.23
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.29.0-rc.2/json-events 20.18
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.24
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.22
30 TestBinaryMirror 0.36
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 201.71
38 TestAddons/parallel/Registry 18.36
40 TestAddons/parallel/InspektorGadget 10.24
41 TestAddons/parallel/MetricsServer 5.25
44 TestAddons/parallel/CSI 57.85
45 TestAddons/parallel/Headlamp 11.56
46 TestAddons/parallel/CloudSpanner 5.17
47 TestAddons/parallel/LocalPath 51.77
48 TestAddons/parallel/NvidiaDevicePlugin 5.16
49 TestAddons/parallel/Yakd 5
52 TestAddons/serial/GCPAuth/Namespaces 0.07
53 TestAddons/StoppedEnableDisable 12.28
61 TestHyperKitDriverInstallOrUpdate 10.49
64 TestErrorSpam/setup 151.42
65 TestErrorSpam/start 0.35
66 TestErrorSpam/status 0.26
67 TestErrorSpam/pause 0.69
68 TestErrorSpam/unpause 0.64
69 TestErrorSpam/stop 12.25
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 47.81
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 39.24
76 TestFunctional/serial/KubeContext 0.03
77 TestFunctional/serial/KubectlGetPods 0.05
80 TestFunctional/serial/CacheCmd/cache/add_remote 9.61
81 TestFunctional/serial/CacheCmd/cache/add_local 1.18
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
83 TestFunctional/serial/CacheCmd/cache/list 0.04
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.17
86 TestFunctional/serial/CacheCmd/cache/delete 0.08
87 TestFunctional/serial/MinikubeKubectlCmd 0.84
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.16
89 TestFunctional/serial/ExtraConfig 31.94
90 TestFunctional/serial/ComponentHealth 0.04
91 TestFunctional/serial/LogsCmd 0.67
92 TestFunctional/serial/LogsFileCmd 0.67
93 TestFunctional/serial/InvalidService 3.72
95 TestFunctional/parallel/ConfigCmd 0.24
96 TestFunctional/parallel/DashboardCmd 8.05
97 TestFunctional/parallel/DryRun 0.35
98 TestFunctional/parallel/InternationalLanguage 0.13
99 TestFunctional/parallel/StatusCmd 0.25
104 TestFunctional/parallel/AddonsCmd 0.13
105 TestFunctional/parallel/PersistentVolumeClaim 24.23
107 TestFunctional/parallel/SSHCmd 0.13
108 TestFunctional/parallel/CpCmd 0.42
110 TestFunctional/parallel/FileSync 0.07
111 TestFunctional/parallel/CertSync 0.41
115 TestFunctional/parallel/NodeLabels 0.04
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
119 TestFunctional/parallel/License 1.34
120 TestFunctional/parallel/Version/short 0.04
121 TestFunctional/parallel/Version/components 0.33
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.13
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.09
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
126 TestFunctional/parallel/ImageCommands/ImageBuild 6.31
127 TestFunctional/parallel/ImageCommands/Setup 5.49
128 TestFunctional/parallel/DockerEnv/bash 0.39
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
132 TestFunctional/parallel/ServiceCmd/DeployApp 15.09
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.21
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.5
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.53
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
137 TestFunctional/parallel/ServiceCmd/List 0.16
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.1
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
142 TestFunctional/parallel/ServiceCmd/Format 0.13
143 TestFunctional/parallel/ServiceCmd/URL 0.13
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.05
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
147 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.1
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
151 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
154 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
157 TestFunctional/parallel/ProfileCmd/profile_list 0.15
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
159 TestFunctional/parallel/MountCmd/any-port 10.97
160 TestFunctional/parallel/MountCmd/specific-port 1.23
161 TestFunctional/parallel/MountCmd/VerifyCleanup 0.69
162 TestFunctional/delete_addon-resizer_images 0.11
163 TestFunctional/delete_my-image_image 0.04
164 TestFunctional/delete_minikube_cached_images 0.04
168 TestImageBuild/serial/Setup 33.89
169 TestImageBuild/serial/NormalBuild 5.19
171 TestImageBuild/serial/BuildWithDockerIgnore 0.14
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.11
175 TestIngressAddonLegacy/StartLegacyK8sCluster 115.57
177 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 15.85
178 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.25
182 TestJSONOutput/start/Command 46.57
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.28
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.23
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 12.08
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.32
210 TestMainNoArgs 0.03
211 TestMinikubeProfile 185.82
257 TestStoppedBinaryUpgrade/Setup 5.19
269 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
274 TestNoKubernetes/serial/ProfileList 31.46
275 TestNoKubernetes/serial/Stop 0.08
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.65
292 TestStartStop/group/old-k8s-version/serial/Stop 0.07
293 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
305 TestStartStop/group/no-preload/serial/Stop 0.07
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.1
310 TestStartStop/group/embed-certs/serial/Stop 0.06
311 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.1
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.07
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.1
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
332 TestStartStop/group/newest-cni/serial/Stop 0.07
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.1
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-048000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-048000: exit status 85 (96.376084ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-048000 | jenkins | v1.32.0 | 13 Feb 24 14:38 PST |          |
	|         | -p download-only-048000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 14:38:44
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 14:38:44.220155    1409 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:38:44.220308    1409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:38:44.220311    1409 out.go:304] Setting ErrFile to fd 2...
	I0213 14:38:44.220314    1409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:38:44.220427    1409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	W0213 14:38:44.220511    1409 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18170-979/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18170-979/.minikube/config/config.json: no such file or directory
	I0213 14:38:44.221732    1409 out.go:298] Setting JSON to true
	I0213 14:38:44.238782    1409 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":346,"bootTime":1707863578,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 14:38:44.238854    1409 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:38:44.244641    1409 out.go:97] [download-only-048000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 14:38:44.247703    1409 out.go:169] MINIKUBE_LOCATION=18170
	I0213 14:38:44.244780    1409 notify.go:220] Checking for updates...
	W0213 14:38:44.244798    1409 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball: no such file or directory
	I0213 14:38:44.255621    1409 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:38:44.258698    1409 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 14:38:44.261685    1409 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:38:44.264669    1409 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	W0213 14:38:44.270687    1409 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 14:38:44.270893    1409 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:38:44.275576    1409 out.go:97] Using the qemu2 driver based on user configuration
	I0213 14:38:44.275593    1409 start.go:298] selected driver: qemu2
	I0213 14:38:44.275605    1409 start.go:902] validating driver "qemu2" against <nil>
	I0213 14:38:44.275669    1409 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 14:38:44.278673    1409 out.go:169] Automatically selected the socket_vmnet network
	I0213 14:38:44.284260    1409 start_flags.go:392] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0213 14:38:44.284337    1409 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 14:38:44.284445    1409 cni.go:84] Creating CNI manager for ""
	I0213 14:38:44.284460    1409 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0213 14:38:44.284463    1409 start_flags.go:321] config:
	{Name:download-only-048000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-048000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:38:44.289924    1409 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 14:38:44.293697    1409 out.go:97] Downloading VM boot image ...
	I0213 14:38:44.293728    1409 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/iso/arm64/minikube-v1.32.1-1703784139-17866-arm64.iso
	I0213 14:39:02.399186    1409 out.go:97] Starting control plane node download-only-048000 in cluster download-only-048000
	I0213 14:39:02.399231    1409 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 14:39:02.699758    1409 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0213 14:39:02.699847    1409 cache.go:56] Caching tarball of preloaded images
	I0213 14:39:02.700583    1409 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 14:39:02.706193    1409 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0213 14:39:02.706221    1409 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:39:03.321123    1409 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0213 14:39:21.981450    1409 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:39:21.981602    1409 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:39:22.633887    1409 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0213 14:39:22.634076    1409 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/download-only-048000/config.json ...
	I0213 14:39:22.634094    1409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/download-only-048000/config.json: {Name:mkcf4d3fc36f141969847e9612eb45eb33c0fc17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:39:22.634334    1409 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0213 14:39:22.634521    1409 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0213 14:39:23.335632    1409 out.go:169] 
	W0213 14:39:23.340670    1409 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/18170-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106667080 0x106667080 0x106667080 0x106667080 0x106667080 0x106667080 0x106667080] Decompressors:map[bz2:0x140004961c0 gz:0x140004961c8 tar:0x14000496170 tar.bz2:0x14000496180 tar.gz:0x14000496190 tar.xz:0x140004961a0 tar.zst:0x140004961b0 tbz2:0x14000496180 tgz:0x14000496190 txz:0x140004961a0 tzst:0x140004961b0 xz:0x140004961d0 zip:0x140004961e0 zst:0x140004961d8] Getters:map[file:0x14000cf8600 http:0x140008d42d0 https:0x140008d4320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0213 14:39:23.340693    1409 out_reason.go:110] 
	W0213 14:39:23.348518    1409 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 14:39:23.352553    1409 out.go:169] 
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-048000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
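
Note: the kubectl cache failure recorded above comes from the checksum fetch, not the binary download itself: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 returns 404, consistent with darwin/arm64 kubectl builds not being published for a release as old as v1.16.0. A minimal standalone Go sketch (not minikube code; the URL is taken verbatim from the log) that reproduces the failing request:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL the test tried to resolve; expect "404 Not Found".
		url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
		resp, err := http.Head(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}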

TestDownloadOnly/v1.16.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.24s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-048000
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.28.4/json-events (26.94s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-938000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-938000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (26.938910333s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (26.94s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-938000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-938000: exit status 85 (75.094416ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-048000 | jenkins | v1.32.0 | 13 Feb 24 14:38 PST |                     |
	|         | -p download-only-048000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 13 Feb 24 14:39 PST | 13 Feb 24 14:39 PST |
	| delete  | -p download-only-048000        | download-only-048000 | jenkins | v1.32.0 | 13 Feb 24 14:39 PST | 13 Feb 24 14:39 PST |
	| start   | -o=json --download-only        | download-only-938000 | jenkins | v1.32.0 | 13 Feb 24 14:39 PST |                     |
	|         | -p download-only-938000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 14:39:24
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 14:39:24.024854    1463 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:39:24.024991    1463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:39:24.024994    1463 out.go:304] Setting ErrFile to fd 2...
	I0213 14:39:24.024997    1463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:39:24.025121    1463 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 14:39:24.026106    1463 out.go:298] Setting JSON to true
	I0213 14:39:24.042100    1463 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":386,"bootTime":1707863578,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 14:39:24.042189    1463 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:39:24.046986    1463 out.go:97] [download-only-938000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 14:39:24.050978    1463 out.go:169] MINIKUBE_LOCATION=18170
	I0213 14:39:24.047086    1463 notify.go:220] Checking for updates...
	I0213 14:39:24.057976    1463 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:39:24.060999    1463 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 14:39:24.063906    1463 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:39:24.067013    1463 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	W0213 14:39:24.071540    1463 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 14:39:24.071679    1463 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:39:24.074971    1463 out.go:97] Using the qemu2 driver based on user configuration
	I0213 14:39:24.074979    1463 start.go:298] selected driver: qemu2
	I0213 14:39:24.074982    1463 start.go:902] validating driver "qemu2" against <nil>
	I0213 14:39:24.075032    1463 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 14:39:24.077951    1463 out.go:169] Automatically selected the socket_vmnet network
	I0213 14:39:24.083036    1463 start_flags.go:392] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0213 14:39:24.083122    1463 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 14:39:24.083163    1463 cni.go:84] Creating CNI manager for ""
	I0213 14:39:24.083169    1463 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:39:24.083174    1463 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 14:39:24.083183    1463 start_flags.go:321] config:
	{Name:download-only-938000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-938000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:39:24.087383    1463 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 14:39:24.089963    1463 out.go:97] Starting control plane node download-only-938000 in cluster download-only-938000
	I0213 14:39:24.089972    1463 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 14:39:24.758939    1463 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 14:39:24.759040    1463 cache.go:56] Caching tarball of preloaded images
	I0213 14:39:24.759812    1463 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 14:39:24.765375    1463 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0213 14:39:24.765400    1463 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:39:25.360385    1463 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0213 14:39:43.132389    1463 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:39:43.132563    1463 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:39:43.716028    1463 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0213 14:39:43.716228    1463 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/download-only-938000/config.json ...
	I0213 14:39:43.716242    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/download-only-938000/config.json: {Name:mk1b99de5ae3b752288cb80770b5b207da108ef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:39:43.716506    1463 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0213 14:39:43.716636    1463 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/darwin/arm64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-938000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
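
Note: the preload handling above follows a download, save-checksum, verify-checksum sequence, with the expected digest carried in the download URL (checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613). A rough standalone Go illustration of that md5 verification, assuming the tarball sits in the current directory (a sketch only, not minikube's preload.go):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		// Hash the downloaded tarball and compare against the digest from the URL.
		f, err := os.Open("preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		fmt.Println(hex.EncodeToString(h.Sum(nil)) == "6fb922d1d9dc01a9d3c0b965ed219613")
	}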

TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-938000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.29.0-rc.2/json-events (20.18s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-091000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-091000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 : (20.174897125s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (20.18s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-091000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-091000: exit status 85 (79.271209ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-048000 | jenkins | v1.32.0 | 13 Feb 24 14:38 PST |                     |
	|         | -p download-only-048000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Feb 24 14:39 PST | 13 Feb 24 14:39 PST |
	| delete  | -p download-only-048000           | download-only-048000 | jenkins | v1.32.0 | 13 Feb 24 14:39 PST | 13 Feb 24 14:39 PST |
	| start   | -o=json --download-only           | download-only-938000 | jenkins | v1.32.0 | 13 Feb 24 14:39 PST |                     |
	|         | -p download-only-938000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Feb 24 14:39 PST | 13 Feb 24 14:39 PST |
	| delete  | -p download-only-938000           | download-only-938000 | jenkins | v1.32.0 | 13 Feb 24 14:39 PST | 13 Feb 24 14:39 PST |
	| start   | -o=json --download-only           | download-only-091000 | jenkins | v1.32.0 | 13 Feb 24 14:39 PST |                     |
	|         | -p download-only-091000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 14:39:51
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 14:39:51.497814    1499 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:39:51.497967    1499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:39:51.497970    1499 out.go:304] Setting ErrFile to fd 2...
	I0213 14:39:51.497972    1499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:39:51.498119    1499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 14:39:51.499119    1499 out.go:298] Setting JSON to true
	I0213 14:39:51.515229    1499 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":413,"bootTime":1707863578,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 14:39:51.515292    1499 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:39:51.520105    1499 out.go:97] [download-only-091000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 14:39:51.524091    1499 out.go:169] MINIKUBE_LOCATION=18170
	I0213 14:39:51.520201    1499 notify.go:220] Checking for updates...
	I0213 14:39:51.532050    1499 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:39:51.535093    1499 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 14:39:51.538115    1499 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:39:51.540996    1499 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	W0213 14:39:51.547086    1499 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 14:39:51.547298    1499 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:39:51.548769    1499 out.go:97] Using the qemu2 driver based on user configuration
	I0213 14:39:51.548779    1499 start.go:298] selected driver: qemu2
	I0213 14:39:51.548782    1499 start.go:902] validating driver "qemu2" against <nil>
	I0213 14:39:51.548830    1499 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 14:39:51.552061    1499 out.go:169] Automatically selected the socket_vmnet network
	I0213 14:39:51.557210    1499 start_flags.go:392] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0213 14:39:51.557297    1499 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 14:39:51.557337    1499 cni.go:84] Creating CNI manager for ""
	I0213 14:39:51.557345    1499 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0213 14:39:51.557350    1499 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 14:39:51.557354    1499 start_flags.go:321] config:
	{Name:download-only-091000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-091000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:39:51.561776    1499 iso.go:125] acquiring lock: {Name:mk34fb77f66ac5e2b24bb14f9b4fa1a96d01e1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 14:39:51.565162    1499 out.go:97] Starting control plane node download-only-091000 in cluster download-only-091000
	I0213 14:39:51.565177    1499 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 14:39:52.233253    1499 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0213 14:39:52.233304    1499 cache.go:56] Caching tarball of preloaded images
	I0213 14:39:52.234098    1499 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 14:39:52.240477    1499 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0213 14:39:52.240509    1499 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:39:52.831049    1499 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0213 14:40:09.582651    1499 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:40:09.582814    1499 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18170-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0213 14:40:10.138657    1499 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0213 14:40:10.138854    1499 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/download-only-091000/config.json ...
	I0213 14:40:10.138872    1499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/download-only-091000/config.json: {Name:mk14beda69dae197718de7f1f725037bc7c2662f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 14:40:10.139095    1499 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0213 14:40:10.139219    1499 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18170-979/.minikube/cache/darwin/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-091000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.24s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-091000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.36s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-956000 --alsologtostderr --binary-mirror http://127.0.0.1:49326 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-956000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-956000
--- PASS: TestBinaryMirror (0.36s)
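
Note: TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:49326, so binaries are fetched from a local HTTP endpoint instead of dl.k8s.io. A minimal Go sketch of such a mirror, assuming a hypothetical ./mirror directory laid out like the dl.k8s.io release tree (the test wires up its own in-process server; this is only an illustration):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve e.g. ./mirror/v1.28.4/bin/darwin/arm64/kubectl on the port the test used.
		log.Fatal(http.ListenAndServe("127.0.0.1:49326", http.FileServer(http.Dir("./mirror"))))
	}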

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-975000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-975000: exit status 85 (57.681125ms)

-- stdout --
	* Profile "addons-975000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-975000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
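
Note: the assertion here is that addon commands against a profile that does not exist fail with exit status 85 instead of creating anything. A small Go sketch of observing that exit code, mirroring the (dbg) Run pattern above (binary path and profile name taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "addons", "enable", "dashboard", "-p", "addons-975000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit code:", ee.ExitCode()) // 85 when the profile is missing
		}
	}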

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-975000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-975000: exit status 85 (61.552625ms)

-- stdout --
	* Profile "addons-975000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-975000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (201.71s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-975000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-darwin-arm64 start -p addons-975000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m21.713101083s)
--- PASS: TestAddons/Setup (201.71s)

TestAddons/parallel/Registry (18.36s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 7.265042ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-wqg2d" [81878e2a-d1ef-4326-86c8-ad5b59db464e] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004685042s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dcfkf" [bb9bcd40-a4eb-481d-8ed1-556b77cb39c6] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004130958s
addons_test.go:340: (dbg) Run:  kubectl --context addons-975000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-975000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-975000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.008050792s)
addons_test.go:359: (dbg) Run:  out/minikube-darwin-arm64 -p addons-975000 ip
2024/02/13 14:43:53 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-darwin-arm64 -p addons-975000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.36s)
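
Note: the DEBUG line above is the host-side reachability probe: after the "minikube -p addons-975000 ip" step resolves the VM address, the registry addon is expected to answer on port 5000. A hypothetical standalone version of that probe in Go (IP and port taken from the log; not the test's own code):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		resp, err := http.Get("http://192.168.105.2:5000")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status) // a 200 here means the registry is reachable from the host
	}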

TestAddons/parallel/InspektorGadget (10.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-m7vqd" [0ed7b827-c476-467c-83f1-ec43c5124741] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004638791s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-975000
addons_test.go:841: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-975000: (5.235991167s)
--- PASS: TestAddons/parallel/InspektorGadget (10.24s)

                                                
TestAddons/parallel/MetricsServer (5.25s)

=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.318708ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-jvnzm" [675f5d76-e24a-43d5-9312-34d0e1ece10e] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004339417s
addons_test.go:415: (dbg) Run:  kubectl --context addons-975000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-arm64 -p addons-975000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

TestAddons/parallel/CSI (57.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 8.218125ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-975000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-975000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [de40740a-0d99-4020-87d7-473c8fb949fe] Pending
helpers_test.go:344: "task-pv-pod" [de40740a-0d99-4020-87d7-473c8fb949fe] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [de40740a-0d99-4020-87d7-473c8fb949fe] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004440916s
addons_test.go:584: (dbg) Run:  kubectl --context addons-975000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-975000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-975000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-975000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-975000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-975000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-975000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4b611eef-79e5-4285-bb48-4ed60b443507] Pending
helpers_test.go:344: "task-pv-pod-restore" [4b611eef-79e5-4285-bb48-4ed60b443507] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4b611eef-79e5-4285-bb48-4ed60b443507] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003987959s
addons_test.go:626: (dbg) Run:  kubectl --context addons-975000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-975000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-975000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-arm64 -p addons-975000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-arm64 -p addons-975000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.120216542s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-arm64 -p addons-975000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.85s)
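
Note: the repeated helpers_test.go:394 invocations above are a poll loop: the helper re-runs kubectl against the PVC's .status.phase until it reports Bound or the 6m0s budget runs out. A rough standalone Go equivalent of that loop (context, PVC name, and timeout taken from the log; the real helper lives in helpers_test.go):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-975000",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && string(out) == "Bound" {
				fmt.Println("pvc bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pvc hpvc")
	}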

TestAddons/parallel/Headlamp (11.56s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-975000 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-2k767" [9a10eb58-fe91-4da3-b012-9445ee6e8fc3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-2k767" [9a10eb58-fe91-4da3-b012-9445ee6e8fc3] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004126375s
--- PASS: TestAddons/parallel/Headlamp (11.56s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-6pftl" [0501f006-c26f-426c-9904-91939856f50f] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004270583s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-975000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (51.77s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-975000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-975000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-975000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3d8b6840-83e4-4a0a-afb7-94305920c666] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3d8b6840-83e4-4a0a-afb7-94305920c666] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3d8b6840-83e4-4a0a-afb7-94305920c666] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003685667s
addons_test.go:891: (dbg) Run:  kubectl --context addons-975000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-arm64 -p addons-975000 ssh "cat /opt/local-path-provisioner/pvc-69fc1814-f173-4904-b8e0-9dadd6946f89_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-975000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-975000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-arm64 -p addons-975000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-arm64 -p addons-975000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.312454083s)
--- PASS: TestAddons/parallel/LocalPath (51.77s)
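
The repeated helpers_test.go:394 calls above are a poll over the claim's phase. As a rough Go sketch of that loop (waitForPVCPhase is an illustrative name, not the real helper; the actual logic lives in helpers_test.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase shells out to kubectl, the same way the log above does,
// until the claim reports wantPhase or the deadline passes.
func waitForPVCPhase(kubeCtx, name, ns, wantPhase string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeCtx,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == wantPhase {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s never reached phase %q", ns, name, wantPhase)
}

func main() {
	if err := waitForPVCPhase("addons-975000", "test-pvc", "default", "Bound", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}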

TestAddons/parallel/NvidiaDevicePlugin (5.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-m85cq" [24ef4152-2e8a-4d81-8c22-3ada9c124d45] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0040845s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-975000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.16s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-cgk6p" [3933bd43-84ed-4871-95e7-6004400d13c7] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00401975s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-975000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-975000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-975000
addons_test.go:172: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-975000: (12.08608175s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-975000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-975000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-975000
--- PASS: TestAddons/StoppedEnableDisable (12.28s)

TestHyperKitDriverInstallOrUpdate (10.49s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.49s)

TestErrorSpam/setup (151.42s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-817000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-817000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 --driver=qemu2 : (2m31.423542667s)
--- PASS: TestErrorSpam/setup (151.42s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 pause
--- PASS: TestErrorSpam/pause (0.69s)

TestErrorSpam/unpause (0.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 unpause
--- PASS: TestErrorSpam/unpause (0.64s)

TestErrorSpam/stop (12.25s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 stop
E0213 14:48:34.979936    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:48:34.988624    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:48:35.000764    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:48:35.022861    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:48:35.065039    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:48:35.146347    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:48:35.308348    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:48:35.629976    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:48:36.269919    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:48:37.551190    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:48:40.111574    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:48:45.231076    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 stop: (12.083740166s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-817000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-817000 stop
--- PASS: TestErrorSpam/stop (12.25s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18170-979/.minikube/files/etc/test/nested/copy/1407/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-023000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0213 14:48:55.469572    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:49:15.946503    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-023000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.804949709s)
--- PASS: TestFunctional/serial/StartWithProxy (47.81s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.24s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-023000 --alsologtostderr -v=8
E0213 14:49:56.906548    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-023000 --alsologtostderr -v=8: (39.237039416s)
functional_test.go:659: soft start took 39.237406792s for "functional-023000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.24s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-023000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-023000 cache add registry.k8s.io/pause:3.1: (3.916067708s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-023000 cache add registry.k8s.io/pause:3.3: (3.352815458s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-023000 cache add registry.k8s.io/pause:latest: (2.337070167s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.61s)

TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-023000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local4696157/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 cache add minikube-local-cache-test:functional-023000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 cache delete minikube-local-cache-test:functional-023000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-023000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-023000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (74.592083ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-023000 cache reload: (1.938864209s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)
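
The sequence above is the cache-reload contract: remove the image inside the node, prove crictl can no longer see it, reload from minikube's on-host cache, then prove it is back. A minimal Go sketch of the same flow (the run helper and error handling are assumptions, not the real test code):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output, mirroring the
// "(dbg) Run:" lines in the log.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	const mk = "out/minikube-darwin-arm64"
	const profile = "functional-023000"
	const img = "registry.k8s.io/pause:latest"

	// Remove the image inside the node, so the next inspect must fail.
	_ = run(mk, "-p", profile, "ssh", "sudo docker rmi "+img)
	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	// Reload from minikube's local cache; the inspect should now succeed.
	_ = run(mk, "-p", profile, "cache", "reload")
	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}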

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.84s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 kubectl -- --context functional-023000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.84s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-023000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-023000 get pods: (1.16360425s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.16s)

TestFunctional/serial/ExtraConfig (31.94s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-023000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-023000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.938436875s)
functional_test.go:757: restart took 31.938560041s for "functional-023000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.94s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-023000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.67s)

TestFunctional/serial/LogsFileCmd (0.67s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3534536605/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.67s)

TestFunctional/serial/InvalidService (3.72s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-023000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-023000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-023000: exit status 115 (109.406208ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32658 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-023000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.72s)

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-023000 config get cpus: exit status 14 (35.692167ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-023000 config get cpus: exit status 14 (32.682875ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
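
Exit status 14 is what config get returns for an unset key, and it is what both non-zero-exit assertions above key on. A small Go sketch of that check (binary path and profile name taken from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "config get" on a key that was just unset should fail with status 14.
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-023000",
		"config", "get", "cpus")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("unset key correctly reported: exit status 14")
	} else {
		fmt.Println("unexpected result:", err)
	}
}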

TestFunctional/parallel/DashboardCmd (8.05s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-023000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-023000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2396: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.05s)

TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-023000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-023000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (122.163417ms)

-- stdout --
	* [functional-023000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0213 14:52:02.192238    2358 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:52:02.192357    2358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:52:02.192361    2358 out.go:304] Setting ErrFile to fd 2...
	I0213 14:52:02.192364    2358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:52:02.192499    2358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 14:52:02.193582    2358 out.go:298] Setting JSON to false
	I0213 14:52:02.211981    2358 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1144,"bootTime":1707863578,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 14:52:02.212077    2358 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:52:02.216561    2358 out.go:177] * [functional-023000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0213 14:52:02.223574    2358 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 14:52:02.227473    2358 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:52:02.223613    2358 notify.go:220] Checking for updates...
	I0213 14:52:02.235551    2358 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 14:52:02.238591    2358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:52:02.241630    2358 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 14:52:02.244563    2358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 14:52:02.247849    2358 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 14:52:02.248104    2358 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:52:02.252559    2358 out.go:177] * Using the qemu2 driver based on existing profile
	I0213 14:52:02.259509    2358 start.go:298] selected driver: qemu2
	I0213 14:52:02.259514    2358 start.go:902] validating driver "qemu2" against &{Name:functional-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-023000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:52:02.259557    2358 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 14:52:02.266582    2358 out.go:177] 
	W0213 14:52:02.270512    2358 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0213 14:52:02.274550    2358 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-023000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.35s)
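
The dry-run path validates resources before any VM work: a 250MB request is rejected against the 1800MB floor with exit status 23 and an RSRC_INSUFFICIENT_REQ_MEMORY reason code, as the stderr above shows. A hedged Go sketch of reproducing that assertion (binary path as in the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "functional-023000", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=qemu2")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 &&
		strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("dry-run rejected the undersized memory request as expected")
	} else {
		fmt.Println("unexpected dry-run result:", err)
	}
}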

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-023000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-023000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (130.902292ms)

-- stdout --
	* [functional-023000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0213 14:52:02.538844    2378 out.go:291] Setting OutFile to fd 1 ...
	I0213 14:52:02.542801    2378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:52:02.542807    2378 out.go:304] Setting ErrFile to fd 2...
	I0213 14:52:02.542810    2378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 14:52:02.543008    2378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
	I0213 14:52:02.550893    2378 out.go:298] Setting JSON to false
	I0213 14:52:02.568330    2378 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1144,"bootTime":1707863578,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0213 14:52:02.568423    2378 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0213 14:52:02.577532    2378 out.go:177] * [functional-023000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0213 14:52:02.587540    2378 out.go:177]   - MINIKUBE_LOCATION=18170
	I0213 14:52:02.590496    2378 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	I0213 14:52:02.587657    2378 notify.go:220] Checking for updates...
	I0213 14:52:02.597537    2378 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0213 14:52:02.600517    2378 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 14:52:02.603528    2378 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	I0213 14:52:02.606554    2378 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 14:52:02.608295    2378 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0213 14:52:02.608551    2378 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 14:52:02.612504    2378 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0213 14:52:02.619361    2378 start.go:298] selected driver: qemu2
	I0213 14:52:02.619367    2378 start.go:902] validating driver "qemu2" against &{Name:functional-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-023000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 14:52:02.619433    2378 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 14:52:02.625486    2378 out.go:177] 
	W0213 14:52:02.629529    2378 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0213 14:52:02.633500    2378 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)
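
The -f argument above is a Go text/template rendered over the status struct. A self-contained sketch of how that format string expands (the Status type here is a stand-in, not minikube's real one; the "kublet" key is copied verbatim from the command above, typo and all):

package main

import (
	"os"
	"text/template"
)

// Status stands in for the struct minikube renders status templates against.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	// Rendered with the values a healthy single-node cluster reports.
	_ = tmpl.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running",
		APIServer: "Running", Kubeconfig: "Configured",
	})
}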

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (24.23s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [76f674ca-abf0-4a30-9686-1037b7406533] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003494958s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-023000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-023000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-023000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-023000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f1145b35-f165-4eff-895d-ca697f77cb19] Pending
helpers_test.go:344: "sp-pod" [f1145b35-f165-4eff-895d-ca697f77cb19] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f1145b35-f165-4eff-895d-ca697f77cb19] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004455708s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-023000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-023000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-023000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [26483086-08a8-44d9-b402-c4d592d88eaf] Pending
helpers_test.go:344: "sp-pod" [26483086-08a8-44d9-b402-c4d592d88eaf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [26483086-08a8-44d9-b402-c4d592d88eaf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00388925s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-023000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.23s)
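
The pod delete/re-apply above is the actual persistence check: data written to the claim-backed mount must survive pod replacement. A compact Go sketch of that flow (a real run would wait for the new sp-pod to be Running between apply and exec, as the helpers above do):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl subcommand against the functional-023000 context.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-023000"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	const pod = "testdata/storage-provisioner/pod.yaml"
	_, _ = kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	_, _ = kubectl("delete", "-f", pod)
	_, _ = kubectl("apply", "-f", pod)
	// The file must still be there in the replacement pod.
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Println(string(out), err)
}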

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh -n functional-023000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 cp functional-023000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2001473845/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh -n functional-023000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh -n functional-023000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.42s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1407/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "sudo cat /etc/test/nested/copy/1407/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1407.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "sudo cat /etc/ssl/certs/1407.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1407.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "sudo cat /usr/share/ca-certificates/1407.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14072.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "sudo cat /etc/ssl/certs/14072.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14072.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "sudo cat /usr/share/ca-certificates/14072.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)
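
Each certificate is probed at three paths: the two flat copies plus the hash-named file in /etc/ssl/certs (e.g. 51391683.0) that OpenSSL-style lookups resolve. A minimal Go sketch of the same triple probe (binary path and file names taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/1407.pem",
		"/usr/share/ca-certificates/1407.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		// `cat` inside the VM exits non-zero if the file is missing.
		err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-023000",
			"ssh", "sudo cat "+p).Run()
		fmt.Printf("%s present: %v\n", p, err == nil)
	}
}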

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-023000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-023000 ssh "sudo systemctl is-active crio": exit status 1 (63.313583ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
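The non-zero exit is the expected result here: systemctl is-active exits 0 only when the unit is active and 3 when it is inactive, which the test treats as proof that CRI-O is disabled while Docker is the active runtime. A quick manual check (sketch):

$ minikube -p functional-023000 ssh "sudo systemctl is-active docker"   # "active", exit 0
$ minikube -p functional-023000 ssh "sudo systemctl is-active crio"     # "inactive", exit 3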

TestFunctional/parallel/License (1.34s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.336967125s)
--- PASS: TestFunctional/parallel/License (1.34s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.33s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.33s)
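A sketch of the two version forms exercised by these subtests, assuming the stock minikube binary:

$ minikube -p functional-023000 version --short              # just the minikube version string
$ minikube -p functional-023000 version -o=json --components # adds per-component versions from the guest (kubelet, containerd, docker, ...)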

TestFunctional/parallel/ImageCommands/ImageListShort (0.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-023000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-023000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-023000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-023000 image ls --format short --alsologtostderr:
I0213 14:52:03.562044    2403 out.go:291] Setting OutFile to fd 1 ...
I0213 14:52:03.562207    2403 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 14:52:03.562211    2403 out.go:304] Setting ErrFile to fd 2...
I0213 14:52:03.562214    2403 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 14:52:03.562360    2403 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
I0213 14:52:03.562752    2403 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 14:52:03.562809    2403 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 14:52:03.563730    2403 ssh_runner.go:195] Run: systemctl --version
I0213 14:52:03.563740    2403 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/functional-023000/id_rsa Username:docker}
I0213 14:52:03.591405    2403 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.13s)
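The same listing is available in several formats, exercised by the sibling ImageList* subtests below; a sketch:

$ minikube -p functional-023000 image ls --format short   # repo:tag only, one per line
$ minikube -p functional-023000 image ls --format table   # adds image ID and size columns
$ minikube -p functional-023000 image ls --format json
$ minikube -p functional-023000 image ls --format yaml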

TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-023000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | d315ef79be32c | 43.5MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-023000 | 524e80b107f86 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/google-containers/addon-resizer      | functional-023000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/localhost/my-image                | functional-023000 | ad66e991fcb43 | 1.41MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| docker.io/library/nginx                     | latest            | 11deb55301007 | 192MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | 71a676dd070f4 | 1.41MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-023000 image ls --format table --alsologtostderr:
I0213 14:52:10.156323    2419 out.go:291] Setting OutFile to fd 1 ...
I0213 14:52:10.156471    2419 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 14:52:10.156475    2419 out.go:304] Setting ErrFile to fd 2...
I0213 14:52:10.156477    2419 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 14:52:10.156604    2419 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
I0213 14:52:10.157021    2419 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 14:52:10.157079    2419 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 14:52:10.158077    2419 ssh_runner.go:195] Run: systemctl --version
I0213 14:52:10.158087    2419 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/functional-023000/id_rsa Username:docker}
I0213 14:52:10.185507    2419 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/02/13 14:52:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-023000 image ls --format json --alsologtostderr:
[{"id":"524e80b107f8686c174ff1afa7935644aca7627208e334e0239bd2f900c79284","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-023000"],"size":"30"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-023000"],"size":"32900000"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"d315ef79be32cd8ae44f153a41c42e5e407c04f959074ddb8acc2c26649e2676","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43500000"},{"id":"11deb553
01007d6bf1db2ce20cb5d12e447541969990af4a03e2af8141ebdbed","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3
d02","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1410000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"ad66e991fcb43902c44bd785437be6bbc75b0af9931042a3b0947f099719e83e","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-023000"],"size":"1410000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/stora
ge-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-023000 image ls --format json --alsologtostderr:
I0213 14:52:10.077562    2417 out.go:291] Setting OutFile to fd 1 ...
I0213 14:52:10.077706    2417 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 14:52:10.077709    2417 out.go:304] Setting ErrFile to fd 2...
I0213 14:52:10.077712    2417 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 14:52:10.077847    2417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
I0213 14:52:10.078364    2417 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 14:52:10.078434    2417 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 14:52:10.079375    2417 ssh_runner.go:195] Run: systemctl --version
I0213 14:52:10.079390    2417 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/functional-023000/id_rsa Username:docker}
I0213 14:52:10.108792    2417 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-023000 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-023000
size: "32900000"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: d315ef79be32cd8ae44f153a41c42e5e407c04f959074ddb8acc2c26649e2676
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43500000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 524e80b107f8686c174ff1afa7935644aca7627208e334e0239bd2f900c79284
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-023000
size: "30"
- id: 11deb55301007d6bf1db2ce20cb5d12e447541969990af4a03e2af8141ebdbed
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-023000 image ls --format yaml --alsologtostderr:
I0213 14:52:03.690482    2405 out.go:291] Setting OutFile to fd 1 ...
I0213 14:52:03.690651    2405 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 14:52:03.690659    2405 out.go:304] Setting ErrFile to fd 2...
I0213 14:52:03.690661    2405 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 14:52:03.690812    2405 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
I0213 14:52:03.691251    2405 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 14:52:03.691315    2405 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 14:52:03.692281    2405 ssh_runner.go:195] Run: systemctl --version
I0213 14:52:03.692290    2405 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/functional-023000/id_rsa Username:docker}
I0213 14:52:03.718052    2405 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-023000 ssh pgrep buildkitd: exit status 1 (63.708667ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image build -t localhost/my-image:functional-023000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-023000 image build -t localhost/my-image:functional-023000 testdata/build --alsologtostderr: (6.165196375s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-023000 image build -t localhost/my-image:functional-023000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 3ba10c654392
Removing intermediate container 3ba10c654392
---> ffa9c4416e9b
Step 3/3 : ADD content.txt /
---> ad66e991fcb4
Successfully built ad66e991fcb4
Successfully tagged localhost/my-image:functional-023000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-023000 image build -t localhost/my-image:functional-023000 testdata/build --alsologtostderr:
I0213 14:52:03.832096    2409 out.go:291] Setting OutFile to fd 1 ...
I0213 14:52:03.832329    2409 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 14:52:03.832332    2409 out.go:304] Setting ErrFile to fd 2...
I0213 14:52:03.832334    2409 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 14:52:03.832471    2409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18170-979/.minikube/bin
I0213 14:52:03.832899    2409 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 14:52:03.833569    2409 config.go:182] Loaded profile config "functional-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 14:52:03.834525    2409 ssh_runner.go:195] Run: systemctl --version
I0213 14:52:03.834536    2409 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18170-979/.minikube/machines/functional-023000/id_rsa Username:docker}
I0213 14:52:03.861419    2409 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2903454206.tar
I0213 14:52:03.861472    2409 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0213 14:52:03.864371    2409 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2903454206.tar
I0213 14:52:03.865827    2409 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2903454206.tar: stat -c "%s %y" /var/lib/minikube/build/build.2903454206.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2903454206.tar': No such file or directory
I0213 14:52:03.865841    2409 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2903454206.tar --> /var/lib/minikube/build/build.2903454206.tar (3072 bytes)
I0213 14:52:03.872901    2409 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2903454206
I0213 14:52:03.875753    2409 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2903454206 -xf /var/lib/minikube/build/build.2903454206.tar
I0213 14:52:03.878939    2409 docker.go:360] Building image: /var/lib/minikube/build/build.2903454206
I0213 14:52:03.878971    2409 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-023000 /var/lib/minikube/build/build.2903454206
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0213 14:52:09.954046    2409 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-023000 /var/lib/minikube/build/build.2903454206: (6.075244667s)
I0213 14:52:09.954114    2409 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2903454206
I0213 14:52:09.957164    2409 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2903454206.tar
I0213 14:52:09.959845    2409 build_images.go:207] Built localhost/my-image:functional-023000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2903454206.tar
I0213 14:52:09.959860    2409 build_images.go:123] succeeded building to: functional-023000
I0213 14:52:09.959863    2409 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.31s)
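The testdata/build context above is a three-step Dockerfile (FROM busybox, RUN true, ADD content.txt, per the Step 1/3..3/3 output). A minimal sketch of the same build done by hand, where content.txt is any small file:

$ printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
$ echo hello > content.txt
$ minikube -p functional-023000 image build -t localhost/my-image:functional-023000 .
$ minikube -p functional-023000 image ls | grep my-image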

TestFunctional/parallel/ImageCommands/Setup (5.49s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.448327958s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-023000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.49s)

TestFunctional/parallel/DockerEnv/bash (0.39s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-023000 docker-env) && out/minikube-darwin-arm64 status -p functional-023000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-023000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.39s)
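docker-env emits shell exports (DOCKER_HOST, DOCKER_CERT_PATH, ...) that point the local docker client at the daemon inside the VM; a sketch, with the --unset form noted as the usual way to undo it:

$ eval $(minikube -p functional-023000 docker-env)
$ docker images                                    # now lists the VM's images, not the host's
$ eval $(minikube -p functional-023000 docker-env --unset)   # restore the host daemon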

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)
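update-context rewrites the profile's kubeconfig entry if the cluster's IP or port has changed and is a no-op otherwise, which is what these three scenarios exercise; a sketch:

$ minikube -p functional-023000 update-context --alsologtostderr -v=2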

TestFunctional/parallel/ServiceCmd/DeployApp (15.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-023000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-023000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-xzst7" [d70d9020-134f-4d0e-b670-298fb9f63319] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-xzst7" [d70d9020-134f-4d0e-b670-298fb9f63319] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0213 14:51:18.826215    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.00303575s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.09s)
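The deployment/expose pair above is the standard way to get a NodePort service to test against; a sketch reusing the test's image:

$ kubectl --context functional-023000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
$ kubectl --context functional-023000 expose deployment hello-node --type=NodePort --port=8080
$ kubectl --context functional-023000 get pods -l app=hello-node -w   # wait for Running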

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image load --daemon gcr.io/google-containers/addon-resizer:functional-023000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-023000 image load --daemon gcr.io/google-containers/addon-resizer:functional-023000 --alsologtostderr: (2.128758833s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image load --daemon gcr.io/google-containers/addon-resizer:functional-023000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-023000 image load --daemon gcr.io/google-containers/addon-resizer:functional-023000 --alsologtostderr: (1.430192333s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.50s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.592164709s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-023000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image load --daemon gcr.io/google-containers/addon-resizer:functional-023000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-023000 image load --daemon gcr.io/google-containers/addon-resizer:functional-023000 --alsologtostderr: (1.816591458s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.53s)
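image load --daemon copies an image from the host's Docker daemon into the cluster's runtime; the pull/tag/load cycle above by hand (sketch):

$ docker pull gcr.io/google-containers/addon-resizer:1.8.9
$ docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-023000
$ minikube -p functional-023000 image load --daemon gcr.io/google-containers/addon-resizer:functional-023000
$ minikube -p functional-023000 image ls | grep addon-resizer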

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image save gcr.io/google-containers/addon-resizer:functional-023000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ServiceCmd/List (0.16s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.16s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image rm gcr.io/google-containers/addon-resizer:functional-023000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 service list -o json
functional_test.go:1490: Took "95.999709ms" to run "out/minikube-darwin-arm64 -p functional-023000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.10s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:31957
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)
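Together with ImageSaveToFile above, this is the tar round-trip; a sketch:

$ minikube -p functional-023000 image save gcr.io/google-containers/addon-resizer:functional-023000 ./addon-resizer-save.tar
$ minikube -p functional-023000 image load ./addon-resizer-save.tar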

TestFunctional/parallel/ServiceCmd/Format (0.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.13s)

TestFunctional/parallel/ServiceCmd/URL (0.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:31957
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.13s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-023000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-023000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-023000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-023000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2224: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.05s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-023000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 image save --daemon gcr.io/google-containers/addon-resizer:functional-023000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-023000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-023000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-023000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2ba3ebed-c39b-4a0f-8a3e-687cd79baec3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2ba3ebed-c39b-4a0f-8a3e-687cd79baec3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003929667s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-023000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.98.254 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-023000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
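The tunnel group above follows the usual workflow: keep minikube tunnel running, wait for a LoadBalancer service to get an ingress IP, then reach it from the host. A sketch; the 10.111.98.254 address above is cluster-specific, and testdata/testsvc.yaml is the test's nginx LoadBalancer manifest:

$ minikube -p functional-023000 tunnel &          # must stay running; may prompt for sudo
$ kubectl --context functional-023000 apply -f testdata/testsvc.yaml
$ kubectl --context functional-023000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
$ curl http://<ingress-ip>/
$ kill %1                                         # stopping the tunnel removes the routes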

TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "115.504625ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "38.841ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "112.385375ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "36.3725ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)
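The profile-list forms timed above, as plain commands (sketch):

$ minikube profile list                   # human-readable table
$ minikube profile list -o json           # full JSON, including cluster status
$ minikube profile list -o json --light   # skips status probes, hence the much shorter runtime above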

TestFunctional/parallel/MountCmd/any-port (10.97s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-023000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2134871966/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1707864710097207000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2134871966/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1707864710097207000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2134871966/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1707864710097207000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2134871966/001/test-1707864710097207000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.90725ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.309458ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.442959ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 13 22:51 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 13 22:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 13 22:51 test-1707864710097207000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh cat /mount-9p/test-1707864710097207000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-023000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [05417798-acfc-429f-bd82-07978c196c97] Pending
helpers_test.go:344: "busybox-mount" [05417798-acfc-429f-bd82-07978c196c97] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [05417798-acfc-429f-bd82-07978c196c97] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [05417798-acfc-429f-bd82-07978c196c97] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004014208s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-023000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-023000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2134871966/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.97s)
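The mount flow above by hand: start a 9p mount in the background, verify it from inside the guest (the test retries findmnt until the mount appears, hence the early non-zero exits), then unmount. A sketch with a hypothetical host directory /tmp/mount-demo:

$ mkdir -p /tmp/mount-demo && minikube mount -p functional-023000 /tmp/mount-demo:/mount-9p &
$ minikube -p functional-023000 ssh "findmnt -T /mount-9p | grep 9p"
$ minikube -p functional-023000 ssh -- ls -la /mount-9p
$ minikube -p functional-023000 ssh "sudo umount -f /mount-9p"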

TestFunctional/parallel/MountCmd/specific-port (1.23s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-023000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2131026763/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.570542ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-023000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2131026763/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-023000 ssh "sudo umount -f /mount-9p": exit status 1 (67.576666ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-023000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-023000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2131026763/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.23s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.69s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-023000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3430333438/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-023000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3430333438/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-023000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3430333438/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T" /mount1: exit status 1 (82.687375ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-023000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-023000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-023000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3430333438/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-023000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3430333438/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-023000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3430333438/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.69s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.11s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-023000
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

                                                
                                    
TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-023000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-023000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
TestImageBuild/serial/Setup (33.89s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-128000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-128000 --driver=qemu2 : (33.890790625s)
--- PASS: TestImageBuild/serial/Setup (33.89s)

                                                
                                    
TestImageBuild/serial/NormalBuild (5.19s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-128000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-128000: (5.18516525s)
--- PASS: TestImageBuild/serial/NormalBuild (5.19s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.14s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-128000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.14s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-128000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (115.57s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-632000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
E0213 14:53:34.956764    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:54:02.663400    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-632000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m55.566003208s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (115.57s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.85s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-632000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-632000 addons enable ingress --alsologtostderr -v=5: (15.84620675s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.85s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.25s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-632000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.25s)

                                                
                                    
TestJSONOutput/start/Command (46.57s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-935000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0213 14:56:08.474392    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:56:08.480732    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:56:08.492828    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:56:08.514900    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:56:08.556946    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:56:08.638976    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:56:08.801016    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:56:09.123041    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:56:09.765092    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:56:11.047223    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:56:13.609536    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:56:18.730490    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:56:28.972321    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-935000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (46.571489625s)
--- PASS: TestJSONOutput/start/Command (46.57s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.28s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-935000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.28s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.23s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-935000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.23s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.08s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-935000 --output=json --user=testUser
E0213 14:56:49.453916    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-935000 --output=json --user=testUser: (12.076898791s)
--- PASS: TestJSONOutput/stop/Command (12.08s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.32s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-625000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-625000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.848375ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9ff81788-bc50-4fbd-adb8-59ee0d9c5456","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-625000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a94ef80-670d-4046-85b3-79335f299dc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18170"}}
	{"specversion":"1.0","id":"10dc7475-95c8-425b-a710-640c4968ef2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig"}}
	{"specversion":"1.0","id":"6f53dc9f-2ee0-4745-ad62-3e2c52452731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d5857054-36c9-4f17-90c7-135846910736","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"11035b4b-8492-47f2-a19c-49d26e8329ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube"}}
	{"specversion":"1.0","id":"770828d7-7553-4454-9a59-d9f1d9a70d9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f1828e6a-b212-4ea1-8406-1fc8b4d9dad0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-625000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-625000
--- PASS: TestErrorJSONOutput (0.32s)
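
For context, the stdout above is minikube's JSON event stream: one CloudEvents-style envelope per line, with the payload under "data". A minimal Go sketch of consuming that stream, with the struct shape inferred from the events shown above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the envelopes printed by --output=json in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe the minikube output in here
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// e.g. exitcode "56", name "DRV_UNSUPPORTED_OS" in this run
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}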

                                                
                                    
TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestMinikubeProfile (185.82s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-376000 --driver=qemu2 
E0213 14:57:30.415291    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
E0213 14:58:34.947850    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/addons-975000/client.crt: no such file or directory
E0213 14:58:52.335319    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/functional-023000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-376000 --driver=qemu2 : (2m32.071351167s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-377000 --driver=qemu2 
E0213 15:00:03.217019    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:00:03.223362    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:00:03.235410    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:00:03.256164    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:00:03.298278    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:00:03.380380    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:00:03.542466    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:00:03.864577    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
E0213 15:00:04.506728    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-377000 --driver=qemu2 : (32.94286725s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-376000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-377000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-377000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-377000
helpers_test.go:175: Cleaning up "first-376000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-376000
E0213 15:00:05.788957    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
--- PASS: TestMinikubeProfile (185.82s)
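
The assertions above hinge on `profile list -ojson`. A hedged Go sketch of reading that output; the valid/invalid grouping and the Name field match minikube's current JSON as far as known, but treat the exact shape as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	// Assumed shape: {"invalid":[...],"valid":[{"Name":...}, ...]}
	var list struct {
		Valid   []struct{ Name string } `json:"valid"`
		Invalid []struct{ Name string } `json:"invalid"`
	}
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, p := range list.Valid {
		fmt.Println("profile:", p.Name) // first-376000 and second-377000 in this run
	}
}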

                                                
                                    
TestStoppedBinaryUpgrade/Setup (5.19s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-504000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-504000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (96.940958ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-504000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18170
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18170-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18170-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
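
What this test asserts, sketched in Go: the flag combination must be rejected with a usage error, where the exit status 14 and the MK_USAGE message are both taken from the log above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "NoKubernetes-504000",
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=qemu2")
	out, err := cmd.CombinedOutput()
	if err == nil {
		panic("expected the conflicting flags to be rejected")
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() != 14 {
		fmt.Printf("unexpected exit code %d\n", ee.ExitCode())
	}
	if !strings.Contains(string(out), "MK_USAGE") {
		fmt.Println("expected an MK_USAGE error in the output")
	}
}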

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-504000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-504000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.938167ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-504000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.46s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
E0213 15:15:03.207184    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.730452958s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.725260542s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.46s)

                                                
                                    
TestNoKubernetes/serial/Stop (0.08s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-504000
--- PASS: TestNoKubernetes/serial/Stop (0.08s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-504000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-504000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.049583ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-504000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-809000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (0.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-417000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-417000 -n old-k8s-version-417000: exit status 7 (31.859ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-417000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)
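
The "(may be ok)" note above reflects the pattern used throughout these EnableAddonAfterStop tests: `minikube status` exits non-zero once the host is Stopped (status 7 in this log), and the test proceeds to enable the addon anyway. A minimal Go sketch of that tolerant check, with the helper name as an illustrative assumption:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus returns the templated host state plus the raw exit code;
// a non-zero code here may simply mean the cluster is stopped.
func hostStatus(profile string) (string, int) {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode() // 7 in the run above, matching the Stopped state
	}
	return string(out), code
}

func main() {
	state, code := hostStatus("old-k8s-version-417000")
	fmt.Printf("host=%q exit=%d (non-zero may be ok when stopped)\n", state, code)
}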

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (0.07s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-843000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-843000 -n no-preload-843000: exit status 7 (31.2565ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-843000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-876000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-876000 -n embed-certs-876000: exit status 7 (31.07475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-876000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (0.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-066000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-066000 -n default-k8s-diff-port-066000: exit status 7 (32.36875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-066000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-330000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (0.07s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-330000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000: exit status 7 (29.96275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-330000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (24/271)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (2.65s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0213 15:02:47.079169    1407 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18170-979/.minikube/profiles/ingress-addon-legacy-632000/client.crt: no such file or directory
panic.go:523: 
----------------------- debugLogs start: cilium-891000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-891000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-891000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-891000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-891000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-891000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-891000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-891000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-891000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-891000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-891000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: /etc/hosts:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: /etc/resolv.conf:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-891000

>>> host: crictl pods:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: crictl containers:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> k8s: describe netcat deployment:
error: context "cilium-891000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-891000" does not exist

>>> k8s: netcat logs:
error: context "cilium-891000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-891000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-891000" does not exist

>>> k8s: coredns logs:
error: context "cilium-891000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-891000" does not exist

>>> k8s: api server logs:
error: context "cilium-891000" does not exist

>>> host: /etc/cni:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: ip a s:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: ip r s:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: iptables-save:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: iptables table nat:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-891000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-891000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-891000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-891000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-891000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-891000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-891000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-891000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-891000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-891000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-891000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: kubelet daemon config:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> k8s: kubelet logs:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-891000

>>> host: docker daemon status:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: docker daemon config:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: docker system info:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: cri-docker daemon status:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: cri-docker daemon config:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: cri-dockerd version:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: containerd daemon status:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: containerd daemon config:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: containerd config dump:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: crio daemon status:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: crio daemon config:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: /etc/crio:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

>>> host: crio config:
* Profile "cilium-891000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891000"

----------------------- debugLogs end: cilium-891000 [took: 2.426748167s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-891000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-891000
--- SKIP: TestNetworkPlugins/group/cilium (2.65s)

TestStartStop/group/disable-driver-mounts (0.26s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-877000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-877000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)
