Test Report: QEMU_macOS 19529

d7f9f66bdcb95e27f1005d5ce9d414c92a72aaf8:2024-08-28:35983

Test fail (98/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.7
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.92
33 TestAddons/parallel/Registry 71.27
46 TestCertOptions 10.18
47 TestCertExpiration 195.41
48 TestDockerFlags 10.18
49 TestForceSystemdFlag 10.02
50 TestForceSystemdEnv 10.84
95 TestFunctional/parallel/ServiceCmdConnect 35.15
167 TestMultiControlPlane/serial/StopSecondaryNode 214.15
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 103.81
169 TestMultiControlPlane/serial/RestartSecondaryNode 208.41
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.38
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.04
174 TestMultiControlPlane/serial/StopCluster 202.09
175 TestMultiControlPlane/serial/RestartCluster 5.25
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.08
181 TestImageBuild/serial/Setup 10.26
184 TestJSONOutput/start/Command 9.89
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.05
213 TestMinikubeProfile 10.06
216 TestMountStart/serial/StartWithMountFirst 10.01
219 TestMultiNode/serial/FreshStart2Nodes 9.89
220 TestMultiNode/serial/DeployApp2Nodes 77.85
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.07
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 49.95
228 TestMultiNode/serial/RestartKeepsNodes 8.93
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 2.11
231 TestMultiNode/serial/RestartMultiNode 5.25
232 TestMultiNode/serial/ValidateNameConflict 20.3
236 TestPreload 10.29
238 TestScheduledStopUnix 10.13
239 TestSkaffold 13.95
242 TestRunningBinaryUpgrade 599.86
244 TestKubernetesUpgrade 19.3
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.37
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.02
260 TestStoppedBinaryUpgrade/Upgrade 577.49
262 TestPause/serial/Start 9.92
272 TestNoKubernetes/serial/StartWithK8s 9.84
273 TestNoKubernetes/serial/StartWithStopK8s 5.3
274 TestNoKubernetes/serial/Start 5.31
278 TestNoKubernetes/serial/StartNoArgs 5.3
280 TestNetworkPlugins/group/auto/Start 9.76
281 TestNetworkPlugins/group/calico/Start 10.24
282 TestNetworkPlugins/group/custom-flannel/Start 9.88
283 TestNetworkPlugins/group/false/Start 9.9
284 TestNetworkPlugins/group/kindnet/Start 9.97
285 TestNetworkPlugins/group/flannel/Start 9.79
286 TestNetworkPlugins/group/enable-default-cni/Start 9.74
287 TestNetworkPlugins/group/bridge/Start 9.99
288 TestNetworkPlugins/group/kubenet/Start 9.83
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.77
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.22
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.09
300 TestStartStop/group/old-k8s-version/serial/Pause 0.11
302 TestStartStop/group/no-preload/serial/FirstStart 9.83
303 TestStartStop/group/no-preload/serial/DeployApp 0.09
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
307 TestStartStop/group/no-preload/serial/SecondStart 5.27
309 TestStartStop/group/embed-certs/serial/FirstStart 10.08
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
313 TestStartStop/group/no-preload/serial/Pause 0.1
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.89
316 TestStartStop/group/embed-certs/serial/DeployApp 0.09
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
323 TestStartStop/group/embed-certs/serial/SecondStart 5.25
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
329 TestStartStop/group/embed-certs/serial/Pause 0.1
331 TestStartStop/group/newest-cni/serial/FirstStart 10.11
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/SecondStart 5.26
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (11.7s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-450000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-450000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.693957s)

-- stdout --
	{"specversion":"1.0","id":"3f997961-a494-47ef-bf4b-d2570804f5f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-450000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1557d2d-1119-451c-97c3-ca1f21983f39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19529"}}
	{"specversion":"1.0","id":"5ceaaccf-ffd9-4b24-b787-16e8780b0655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig"}}
	{"specversion":"1.0","id":"7bfa3211-ce51-497e-86b1-c42ab3d30639","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d0aee34c-b9ce-402c-b053-670f604af75a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"da9c36da-1a25-4d40-828b-fe78a3000857","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube"}}
	{"specversion":"1.0","id":"9fb969d8-45b8-4814-a346-1a588cbf2c7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"cb929953-d0e9-4d1d-a2f9-ff516d2e60e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d2bd9469-83af-4ccc-9ccc-fe0fa8878eef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c7106be5-2b08-44f1-ac6f-14d274ee49ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"78e3e9f2-22e6-4e66-bb74-715d013fea10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-450000\" primary control-plane node in \"download-only-450000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"74ae039f-e915-45b2-b802-4e16a7939850","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1ab16cd-17b3-4cd5-a6f8-591439d6d3d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106f53920 0x106f53920 0x106f53920 0x106f53920 0x106f53920 0x106f53920 0x106f53920] Decompressors:map[bz2:0x140007077e0 gz:0x140007077e8 tar:0x140007076a0 tar.bz2:0x140007076b0 tar.gz:0x14000707700 tar.xz:0x14000707710 tar.zst:0x140007077c0 tbz2:0x140007076b0 tgz:0x14
000707700 txz:0x14000707710 tzst:0x140007077c0 xz:0x140007077f0 zip:0x14000707ba0 zst:0x140007077f8] Getters:map[file:0x1400061b850 http:0x14000c18230 https:0x14000c18280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"b57b89e8-54da-4890-9b46-4973410addbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0828 09:50:28.294097    1680 out.go:345] Setting OutFile to fd 1 ...
	I0828 09:50:28.294253    1680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:50:28.294257    1680 out.go:358] Setting ErrFile to fd 2...
	I0828 09:50:28.294259    1680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:50:28.294393    1680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	W0828 09:50:28.294501    1680 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19529-1176/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19529-1176/.minikube/config/config.json: no such file or directory
	I0828 09:50:28.295720    1680 out.go:352] Setting JSON to true
	I0828 09:50:28.313012    1680 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1191,"bootTime":1724862637,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 09:50:28.313148    1680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 09:50:28.318688    1680 out.go:97] [download-only-450000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 09:50:28.318829    1680 notify.go:220] Checking for updates...
	W0828 09:50:28.318883    1680 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball: no such file or directory
	I0828 09:50:28.321645    1680 out.go:169] MINIKUBE_LOCATION=19529
	I0828 09:50:28.324621    1680 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 09:50:28.328672    1680 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 09:50:28.332650    1680 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 09:50:28.335611    1680 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	W0828 09:50:28.341595    1680 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0828 09:50:28.341795    1680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 09:50:28.346612    1680 out.go:97] Using the qemu2 driver based on user configuration
	I0828 09:50:28.346630    1680 start.go:297] selected driver: qemu2
	I0828 09:50:28.346644    1680 start.go:901] validating driver "qemu2" against <nil>
	I0828 09:50:28.346703    1680 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 09:50:28.349600    1680 out.go:169] Automatically selected the socket_vmnet network
	I0828 09:50:28.355557    1680 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0828 09:50:28.355648    1680 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 09:50:28.355726    1680 cni.go:84] Creating CNI manager for ""
	I0828 09:50:28.355744    1680 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0828 09:50:28.355796    1680 start.go:340] cluster config:
	{Name:download-only-450000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-450000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 09:50:28.361270    1680 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 09:50:28.365632    1680 out.go:97] Downloading VM boot image ...
	I0828 09:50:28.365658    1680 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso
	I0828 09:50:32.921300    1680 out.go:97] Starting "download-only-450000" primary control-plane node in "download-only-450000" cluster
	I0828 09:50:32.921319    1680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 09:50:32.983800    1680 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0828 09:50:32.983807    1680 cache.go:56] Caching tarball of preloaded images
	I0828 09:50:32.983957    1680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 09:50:32.988049    1680 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0828 09:50:32.988056    1680 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 09:50:33.135766    1680 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0828 09:50:38.681550    1680 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 09:50:38.682013    1680 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 09:50:39.378310    1680 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0828 09:50:39.378511    1680 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/download-only-450000/config.json ...
	I0828 09:50:39.378527    1680 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/download-only-450000/config.json: {Name:mkc15e7cfaa589eed2dad8ecc4d6524e9169a8ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:50:39.378763    1680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 09:50:39.378946    1680 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0828 09:50:39.909720    1680 out.go:193] 
	W0828 09:50:39.917719    1680 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106f53920 0x106f53920 0x106f53920 0x106f53920 0x106f53920 0x106f53920 0x106f53920] Decompressors:map[bz2:0x140007077e0 gz:0x140007077e8 tar:0x140007076a0 tar.bz2:0x140007076b0 tar.gz:0x14000707700 tar.xz:0x14000707710 tar.zst:0x140007077c0 tbz2:0x140007076b0 tgz:0x14000707700 txz:0x14000707710 tzst:0x140007077c0 xz:0x140007077f0 zip:0x14000707ba0 zst:0x140007077f8] Getters:map[file:0x1400061b850 http:0x14000c18230 https:0x14000c18280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0828 09:50:39.917742    1680 out_reason.go:110] 
	W0828 09:50:39.925767    1680 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 09:50:39.929627    1680 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-450000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.70s)
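
A quick way to confirm the 404 outside the test harness (a hypothetical manual check; the v1.20.0 release likely never shipped a darwin/arm64 kubectl, so the checksum URL does not exist):

	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1
	# expected: a 404 status line, matching the "bad response code: 404" in the error above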

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.92s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-022000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-022000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.787260916s)

-- stdout --
	* [offline-docker-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-022000" primary control-plane node in "offline-docker-022000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-022000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:36:03.986376    4280 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:36:03.986531    4280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:36:03.986535    4280 out.go:358] Setting ErrFile to fd 2...
	I0828 10:36:03.986537    4280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:36:03.986679    4280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:36:03.987842    4280 out.go:352] Setting JSON to false
	I0828 10:36:04.005517    4280 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3927,"bootTime":1724862636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:36:04.005588    4280 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:36:04.011246    4280 out.go:177] * [offline-docker-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:36:04.018087    4280 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:36:04.018109    4280 notify.go:220] Checking for updates...
	I0828 10:36:04.026129    4280 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:36:04.029089    4280 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:36:04.032085    4280 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:36:04.035019    4280 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:36:04.038107    4280 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:36:04.041440    4280 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:36:04.041511    4280 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:36:04.045026    4280 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:36:04.052139    4280 start.go:297] selected driver: qemu2
	I0828 10:36:04.052153    4280 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:36:04.052161    4280 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:36:04.054017    4280 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:36:04.057006    4280 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:36:04.060110    4280 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:36:04.060153    4280 cni.go:84] Creating CNI manager for ""
	I0828 10:36:04.060159    4280 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:36:04.060163    4280 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 10:36:04.060199    4280 start.go:340] cluster config:
	{Name:offline-docker-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:36:04.063653    4280 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:36:04.070968    4280 out.go:177] * Starting "offline-docker-022000" primary control-plane node in "offline-docker-022000" cluster
	I0828 10:36:04.075105    4280 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:36:04.075136    4280 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:36:04.075146    4280 cache.go:56] Caching tarball of preloaded images
	I0828 10:36:04.075217    4280 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:36:04.075223    4280 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:36:04.075300    4280 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/offline-docker-022000/config.json ...
	I0828 10:36:04.075310    4280 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/offline-docker-022000/config.json: {Name:mk688fa752beb1d41359ea1416c4d9e0f0410fc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:36:04.075626    4280 start.go:360] acquireMachinesLock for offline-docker-022000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:36:04.075659    4280 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "offline-docker-022000"
	I0828 10:36:04.075670    4280 start.go:93] Provisioning new machine with config: &{Name:offline-docker-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:36:04.075695    4280 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:36:04.079081    4280 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0828 10:36:04.094849    4280 start.go:159] libmachine.API.Create for "offline-docker-022000" (driver="qemu2")
	I0828 10:36:04.094876    4280 client.go:168] LocalClient.Create starting
	I0828 10:36:04.094948    4280 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:36:04.094981    4280 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:04.094991    4280 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:04.095044    4280 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:36:04.095067    4280 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:04.095074    4280 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:04.095425    4280 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:36:04.255102    4280 main.go:141] libmachine: Creating SSH key...
	I0828 10:36:04.328853    4280 main.go:141] libmachine: Creating Disk image...
	I0828 10:36:04.328877    4280 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:36:04.329081    4280 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/disk.qcow2
	I0828 10:36:04.345571    4280 main.go:141] libmachine: STDOUT: 
	I0828 10:36:04.345602    4280 main.go:141] libmachine: STDERR: 
	I0828 10:36:04.345663    4280 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/disk.qcow2 +20000M
	I0828 10:36:04.354321    4280 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:36:04.354349    4280 main.go:141] libmachine: STDERR: 
	I0828 10:36:04.354361    4280 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/disk.qcow2
	I0828 10:36:04.354368    4280 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:36:04.354377    4280 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:36:04.354416    4280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:5a:4f:41:24:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/disk.qcow2
	I0828 10:36:04.356314    4280 main.go:141] libmachine: STDOUT: 
	I0828 10:36:04.356331    4280 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:36:04.356351    4280 client.go:171] duration metric: took 261.479375ms to LocalClient.Create
	I0828 10:36:06.356384    4280 start.go:128] duration metric: took 2.280762792s to createHost
	I0828 10:36:06.356407    4280 start.go:83] releasing machines lock for "offline-docker-022000", held for 2.280825625s
	W0828 10:36:06.356424    4280 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:36:06.363796    4280 out.go:177] * Deleting "offline-docker-022000" in qemu2 ...
	W0828 10:36:06.381525    4280 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:36:06.381537    4280 start.go:729] Will try again in 5 seconds ...
	I0828 10:36:11.383700    4280 start.go:360] acquireMachinesLock for offline-docker-022000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:36:11.384183    4280 start.go:364] duration metric: took 362.083µs to acquireMachinesLock for "offline-docker-022000"
	I0828 10:36:11.384357    4280 start.go:93] Provisioning new machine with config: &{Name:offline-docker-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:36:11.384674    4280 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:36:11.393215    4280 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0828 10:36:11.443255    4280 start.go:159] libmachine.API.Create for "offline-docker-022000" (driver="qemu2")
	I0828 10:36:11.443311    4280 client.go:168] LocalClient.Create starting
	I0828 10:36:11.443447    4280 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:36:11.443514    4280 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:11.443530    4280 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:11.443602    4280 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:36:11.443647    4280 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:11.443661    4280 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:11.444176    4280 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:36:11.614834    4280 main.go:141] libmachine: Creating SSH key...
	I0828 10:36:11.671635    4280 main.go:141] libmachine: Creating Disk image...
	I0828 10:36:11.671640    4280 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:36:11.671822    4280 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/disk.qcow2
	I0828 10:36:11.681410    4280 main.go:141] libmachine: STDOUT: 
	I0828 10:36:11.681432    4280 main.go:141] libmachine: STDERR: 
	I0828 10:36:11.681474    4280 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/disk.qcow2 +20000M
	I0828 10:36:11.689303    4280 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:36:11.689322    4280 main.go:141] libmachine: STDERR: 
	I0828 10:36:11.689334    4280 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/disk.qcow2
	I0828 10:36:11.689339    4280 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:36:11.689351    4280 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:36:11.689390    4280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:ba:d6:78:59:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/offline-docker-022000/disk.qcow2
	I0828 10:36:11.690968    4280 main.go:141] libmachine: STDOUT: 
	I0828 10:36:11.690991    4280 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:36:11.691003    4280 client.go:171] duration metric: took 247.694417ms to LocalClient.Create
	I0828 10:36:13.693071    4280 start.go:128] duration metric: took 2.30845825s to createHost
	I0828 10:36:13.693128    4280 start.go:83] releasing machines lock for "offline-docker-022000", held for 2.309005375s
	W0828 10:36:13.693373    4280 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:36:13.709654    4280 out.go:201] 
	W0828 10:36:13.713823    4280 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:36:13.713903    4280 out.go:270] * 
	* 
	W0828 10:36:13.716763    4280 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:36:13.732698    4280 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-022000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-28 10:36:13.744262 -0700 PDT m=+2745.664253584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-022000 -n offline-docker-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-022000 -n offline-docker-022000: exit status 7 (46.977125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-022000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-022000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-022000
--- FAIL: TestOffline (9.92s)
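
Every qemu2 VM creation in this run fails the same way: nothing is listening on /var/run/socket_vmnet. A hedged triage sketch for the CI host (the daemon path follows the SocketVMnetClientPath/SocketVMnetPath values in the log above; the gateway address is an assumption based on the 192.168.105.x addresses seen elsewhere in this report):

	ls -l /var/run/socket_vmnet    # a missing or stale socket means the daemon is not running
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet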

TestAddons/parallel/Registry (71.27s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.172042ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-42k6c" [3738eb26-8c6c-4525-be41-2bb099331da6] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005266s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2p694" [c792a3f0-ab28-4331-bc5f-776dcca7e356] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010492209s
addons_test.go:342: (dbg) Run:  kubectl --context addons-793000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-793000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-793000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.060406541s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-793000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 ip
2024/08/28 10:04:00 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 addons disable registry --alsologtostderr -v=1
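
A hypothetical manual re-run of the failed in-cluster probe, using BusyBox wget's -T timeout flag raised to 120s to separate a slow registry from a broken Service (same context and image as the test command above):

	kubectl --context addons-793000 run registry-test --rm --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S -T 120 http://registry.kube-system.svc.cluster.local"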
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-793000 -n addons-793000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-450000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | -p download-only-450000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| delete  | -p download-only-450000              | download-only-450000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| start   | -o=json --download-only              | download-only-436000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | -p download-only-436000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| delete  | -p download-only-436000              | download-only-436000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| delete  | -p download-only-450000              | download-only-450000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| delete  | -p download-only-436000              | download-only-436000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| start   | --download-only -p                   | binary-mirror-378000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | binary-mirror-378000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49313               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-378000              | binary-mirror-378000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| addons  | enable dashboard -p                  | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | addons-793000                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | addons-793000                        |                      |         |         |                     |                     |
	| start   | -p addons-793000 --wait=true         | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:54 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-793000 addons disable         | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 09:54 PDT | 28 Aug 24 09:54 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-793000 addons                 | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 10:03 PDT | 28 Aug 24 10:03 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-793000 addons                 | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 10:03 PDT | 28 Aug 24 10:03 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-793000 addons                 | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 10:03 PDT | 28 Aug 24 10:03 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 10:03 PDT | 28 Aug 24 10:03 PDT |
	|         | addons-793000                        |                      |         |         |                     |                     |
	| ssh     | addons-793000 ssh curl -s            | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 10:03 PDT | 28 Aug 24 10:03 PDT |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| ip      | addons-793000 ip                     | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 10:03 PDT | 28 Aug 24 10:03 PDT |
	| addons  | addons-793000 addons disable         | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 10:03 PDT | 28 Aug 24 10:03 PDT |
	|         | ingress-dns --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-793000 addons disable         | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 10:03 PDT | 28 Aug 24 10:03 PDT |
	|         | ingress --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| ip      | addons-793000 ip                     | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 10:04 PDT | 28 Aug 24 10:04 PDT |
	| addons  | addons-793000 addons disable         | addons-793000        | jenkins | v1.33.1 | 28 Aug 24 10:04 PDT | 28 Aug 24 10:04 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 09:50:47
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 09:50:47.557013    1757 out.go:345] Setting OutFile to fd 1 ...
	I0828 09:50:47.557160    1757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:50:47.557163    1757 out.go:358] Setting ErrFile to fd 2...
	I0828 09:50:47.557165    1757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:50:47.557292    1757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 09:50:47.558372    1757 out.go:352] Setting JSON to false
	I0828 09:50:47.574731    1757 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1210,"bootTime":1724862637,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 09:50:47.574795    1757 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 09:50:47.579331    1757 out.go:177] * [addons-793000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 09:50:47.586454    1757 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 09:50:47.586492    1757 notify.go:220] Checking for updates...
	I0828 09:50:47.593383    1757 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 09:50:47.596539    1757 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 09:50:47.599444    1757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 09:50:47.602435    1757 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 09:50:47.605413    1757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 09:50:47.608557    1757 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 09:50:47.612369    1757 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 09:50:47.619391    1757 start.go:297] selected driver: qemu2
	I0828 09:50:47.619397    1757 start.go:901] validating driver "qemu2" against <nil>
	I0828 09:50:47.619410    1757 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 09:50:47.621831    1757 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 09:50:47.624346    1757 out.go:177] * Automatically selected the socket_vmnet network
	I0828 09:50:47.627425    1757 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 09:50:47.627452    1757 cni.go:84] Creating CNI manager for ""
	I0828 09:50:47.627460    1757 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 09:50:47.627464    1757 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 09:50:47.627489    1757 start.go:340] cluster config:
	{Name:addons-793000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 09:50:47.631291    1757 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 09:50:47.639331    1757 out.go:177] * Starting "addons-793000" primary control-plane node in "addons-793000" cluster
	I0828 09:50:47.643399    1757 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 09:50:47.643412    1757 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 09:50:47.643419    1757 cache.go:56] Caching tarball of preloaded images
	I0828 09:50:47.643479    1757 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 09:50:47.643484    1757 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 09:50:47.643683    1757 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/config.json ...
	I0828 09:50:47.643694    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/config.json: {Name:mkb8ca5066abd6e8d246273683d272983f470794 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:50:47.644059    1757 start.go:360] acquireMachinesLock for addons-793000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 09:50:47.644121    1757 start.go:364] duration metric: took 56.166µs to acquireMachinesLock for "addons-793000"
	I0828 09:50:47.644132    1757 start.go:93] Provisioning new machine with config: &{Name:addons-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 09:50:47.644159    1757 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 09:50:47.652388    1757 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0828 09:50:47.893255    1757 start.go:159] libmachine.API.Create for "addons-793000" (driver="qemu2")
	I0828 09:50:47.893292    1757 client.go:168] LocalClient.Create starting
	I0828 09:50:47.893488    1757 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 09:50:47.971803    1757 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 09:50:47.999182    1757 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 09:50:48.760639    1757 main.go:141] libmachine: Creating SSH key...
	I0828 09:50:49.067762    1757 main.go:141] libmachine: Creating Disk image...
	I0828 09:50:49.067771    1757 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 09:50:49.068043    1757 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/disk.qcow2
	I0828 09:50:49.087424    1757 main.go:141] libmachine: STDOUT: 
	I0828 09:50:49.087450    1757 main.go:141] libmachine: STDERR: 
	I0828 09:50:49.087508    1757 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/disk.qcow2 +20000M
	I0828 09:50:49.095603    1757 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 09:50:49.095619    1757 main.go:141] libmachine: STDERR: 
	I0828 09:50:49.095637    1757 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/disk.qcow2
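Disk preparation is just two qemu-img invocations, each checked only by its empty STDERR: a raw-to-qcow2 conversion followed by a sparse resize. Reproduced in isolation (a sketch; the long machine paths from the log are abbreviated to the machine directory):

    cd .minikube/machines/addons-793000
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # convert the raw scratch disk
    qemu-img resize disk.qcow2 +20000M                           # grow it by the requested 20000 MB
    qemu-img info disk.qcow2                                     # sanity check: virtual size reflects the resize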
	I0828 09:50:49.095642    1757 main.go:141] libmachine: Starting QEMU VM...
	I0828 09:50:49.095670    1757 qemu.go:418] Using hvf for hardware acceleration
	I0828 09:50:49.095699    1757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:58:7b:7e:9b:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/disk.qcow2
	I0828 09:50:49.153295    1757 main.go:141] libmachine: STDOUT: 
	I0828 09:50:49.153328    1757 main.go:141] libmachine: STDERR: 
	I0828 09:50:49.153331    1757 main.go:141] libmachine: Attempt 0
	I0828 09:50:49.153343    1757 main.go:141] libmachine: Searching for ae:58:7b:7e:9b:1f in /var/db/dhcpd_leases ...
	I0828 09:50:49.153403    1757 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0828 09:50:49.153423    1757 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d0a6c6}
	I0828 09:50:51.155555    1757 main.go:141] libmachine: Attempt 1
	I0828 09:50:51.155675    1757 main.go:141] libmachine: Searching for ae:58:7b:7e:9b:1f in /var/db/dhcpd_leases ...
	I0828 09:50:51.156013    1757 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0828 09:50:51.156064    1757 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d0a6c6}
	I0828 09:50:53.158416    1757 main.go:141] libmachine: Attempt 2
	I0828 09:50:53.158607    1757 main.go:141] libmachine: Searching for ae:58:7b:7e:9b:1f in /var/db/dhcpd_leases ...
	I0828 09:50:53.158907    1757 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0828 09:50:53.158970    1757 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d0a6c6}
	I0828 09:50:55.161122    1757 main.go:141] libmachine: Attempt 3
	I0828 09:50:55.161146    1757 main.go:141] libmachine: Searching for ae:58:7b:7e:9b:1f in /var/db/dhcpd_leases ...
	I0828 09:50:55.161243    1757 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0828 09:50:55.161266    1757 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d0a6c6}
	I0828 09:50:57.163278    1757 main.go:141] libmachine: Attempt 4
	I0828 09:50:57.163290    1757 main.go:141] libmachine: Searching for ae:58:7b:7e:9b:1f in /var/db/dhcpd_leases ...
	I0828 09:50:57.163330    1757 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0828 09:50:57.163348    1757 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d0a6c6}
	I0828 09:50:59.165346    1757 main.go:141] libmachine: Attempt 5
	I0828 09:50:59.165358    1757 main.go:141] libmachine: Searching for ae:58:7b:7e:9b:1f in /var/db/dhcpd_leases ...
	I0828 09:50:59.165384    1757 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0828 09:50:59.165389    1757 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d0a6c6}
	I0828 09:51:01.167435    1757 main.go:141] libmachine: Attempt 6
	I0828 09:51:01.167461    1757 main.go:141] libmachine: Searching for ae:58:7b:7e:9b:1f in /var/db/dhcpd_leases ...
	I0828 09:51:01.167542    1757 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0828 09:51:01.167554    1757 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d0a6c6}
	I0828 09:51:03.169701    1757 main.go:141] libmachine: Attempt 7
	I0828 09:51:03.169778    1757 main.go:141] libmachine: Searching for ae:58:7b:7e:9b:1f in /var/db/dhcpd_leases ...
	I0828 09:51:03.170154    1757 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0828 09:51:03.170205    1757 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:58:7b:7e:9b:1f ID:1,ae:58:7b:7e:9b:1f Lease:0x66d0a6f5}
	I0828 09:51:03.170218    1757 main.go:141] libmachine: Found match: ae:58:7b:7e:9b:1f
	I0828 09:51:03.170250    1757 main.go:141] libmachine: IP: 192.168.105.2
	I0828 09:51:03.170268    1757 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
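Attempts 0 through 6 only ever matched the stale lease for 192.168.106.2; on attempt 7 the guest's NIC (MAC ae:58:7b:7e:9b:1f, the one passed via -device virtio-net-pci in the QEMU command above) finally appeared and resolved to 192.168.105.2. The same lookup works by hand on the host (a sketch; /var/db/dhcpd_leases is the macOS bootpd lease database that socket_vmnet guests draw from):

    grep -B1 -A3 'ae:58:7b:7e:9b:1f' /var/db/dhcpd_leases
    # Each lease block records the guest name, ip_address, hw_address and lease expiry.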
	I0828 09:51:06.181101    1757 machine.go:93] provisionDockerMachine start ...
	I0828 09:51:06.182216    1757 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:06.182380    1757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013145a0] 0x101316e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0828 09:51:06.182386    1757 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 09:51:06.233630    1757 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 09:51:06.233655    1757 buildroot.go:166] provisioning hostname "addons-793000"
	I0828 09:51:06.233706    1757 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:06.233846    1757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013145a0] 0x101316e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0828 09:51:06.233853    1757 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-793000 && echo "addons-793000" | sudo tee /etc/hostname
	I0828 09:51:06.285921    1757 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-793000
	
	I0828 09:51:06.285963    1757 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:06.286073    1757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013145a0] 0x101316e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0828 09:51:06.286082    1757 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-793000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-793000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-793000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 09:51:06.334864    1757 main.go:141] libmachine: SSH cmd err, output: <nil>: 
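The fragment above is idempotent hostname plumbing: if no /etc/hosts line already names addons-793000, it either rewrites an existing 127.0.1.1 entry in place or appends a fresh one. Verifiable in the guest afterwards:

    grep '^127.0.1.1' /etc/hosts   # expected: 127.0.1.1 addons-793000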
	I0828 09:51:06.334878    1757 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19529-1176/.minikube CaCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19529-1176/.minikube}
	I0828 09:51:06.334895    1757 buildroot.go:174] setting up certificates
	I0828 09:51:06.334900    1757 provision.go:84] configureAuth start
	I0828 09:51:06.334903    1757 provision.go:143] copyHostCerts
	I0828 09:51:06.335010    1757 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem (1123 bytes)
	I0828 09:51:06.335233    1757 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem (1679 bytes)
	I0828 09:51:06.335335    1757 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem (1078 bytes)
	I0828 09:51:06.335418    1757 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem org=jenkins.addons-793000 san=[127.0.0.1 192.168.105.2 addons-793000 localhost minikube]
	I0828 09:51:06.420719    1757 provision.go:177] copyRemoteCerts
	I0828 09:51:06.420769    1757 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 09:51:06.420786    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:06.447009    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 09:51:06.455560    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0828 09:51:06.463996    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
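At this point the guest holds the CA certificate plus a server keypair whose SANs were requested as [127.0.0.1 192.168.105.2 addons-793000 localhost minikube] (provision.go:117 above). A quick spot check from inside the guest (a sketch; assumes openssl is available in the buildroot image):

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'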
	I0828 09:51:06.474070    1757 provision.go:87] duration metric: took 139.167041ms to configureAuth
	I0828 09:51:06.474081    1757 buildroot.go:189] setting minikube options for container-runtime
	I0828 09:51:06.474246    1757 config.go:182] Loaded profile config "addons-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 09:51:06.474280    1757 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:06.474377    1757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013145a0] 0x101316e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0828 09:51:06.474382    1757 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0828 09:51:06.516924    1757 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0828 09:51:06.516931    1757 buildroot.go:70] root file system type: tmpfs
	I0828 09:51:06.516993    1757 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0828 09:51:06.517046    1757 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:06.517152    1757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013145a0] 0x101316e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0828 09:51:06.517184    1757 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0828 09:51:06.565477    1757 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0828 09:51:06.565518    1757 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:06.565619    1757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013145a0] 0x101316e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0828 09:51:06.565627    1757 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0828 09:51:07.913827    1757 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0828 09:51:07.913841    1757 machine.go:96] duration metric: took 1.732763s to provisionDockerMachine
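The install idiom on display in the preceding SSH exchange: the unit is staged as docker.service.new, and only when it differs from the live unit (or, as here, when no live unit exists and diff fails outright) does the mv + daemon-reload + enable + restart branch fire; the "Created symlink" line is systemctl enable doing its job. Note also that ExecReload reads \$MAINPID in the printf payload but $MAINPID in the echoed file: the backslash only shields the variable from the provisioning shell. Quick verification in the guest:

    systemctl is-enabled docker     # expected: enabled
    systemctl cat docker.service    # shows the unit written above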
	I0828 09:51:07.913847    1757 client.go:171] duration metric: took 20.020945542s to LocalClient.Create
	I0828 09:51:07.913859    1757 start.go:167] duration metric: took 20.021004458s to libmachine.API.Create "addons-793000"
	I0828 09:51:07.913864    1757 start.go:293] postStartSetup for "addons-793000" (driver="qemu2")
	I0828 09:51:07.913870    1757 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 09:51:07.913947    1757 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 09:51:07.913958    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:07.938838    1757 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 09:51:07.940538    1757 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 09:51:07.940546    1757 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19529-1176/.minikube/addons for local assets ...
	I0828 09:51:07.940645    1757 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19529-1176/.minikube/files for local assets ...
	I0828 09:51:07.940681    1757 start.go:296] duration metric: took 26.814625ms for postStartSetup
	I0828 09:51:07.941095    1757 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/config.json ...
	I0828 09:51:07.941293    1757 start.go:128] duration metric: took 20.297529084s to createHost
	I0828 09:51:07.941319    1757 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:07.941416    1757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1013145a0] 0x101316e00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0828 09:51:07.941421    1757 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 09:51:07.989837    1757 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724863867.914708503
	
	I0828 09:51:07.989847    1757 fix.go:216] guest clock: 1724863867.914708503
	I0828 09:51:07.989851    1757 fix.go:229] Guest: 2024-08-28 09:51:07.914708503 -0700 PDT Remote: 2024-08-28 09:51:07.941296 -0700 PDT m=+20.404061710 (delta=-26.587497ms)
	I0828 09:51:07.989862    1757 fix.go:200] guest clock delta is within tolerance: -26.587497ms
	I0828 09:51:07.989866    1757 start.go:83] releasing machines lock for "addons-793000", held for 20.346141708s
	I0828 09:51:07.990186    1757 ssh_runner.go:195] Run: cat /version.json
	I0828 09:51:07.990186    1757 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 09:51:07.990194    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:07.990216    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:08.012599    1757 ssh_runner.go:195] Run: systemctl --version
	I0828 09:51:08.014713    1757 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 09:51:08.059164    1757 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 09:51:08.059208    1757 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 09:51:08.065939    1757 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 09:51:08.065948    1757 start.go:495] detecting cgroup driver to use...
	I0828 09:51:08.066064    1757 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 09:51:08.072849    1757 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0828 09:51:08.076389    1757 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0828 09:51:08.079909    1757 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0828 09:51:08.079933    1757 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0828 09:51:08.083595    1757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 09:51:08.087391    1757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0828 09:51:08.091473    1757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 09:51:08.095382    1757 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 09:51:08.099254    1757 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0828 09:51:08.103065    1757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0828 09:51:08.106782    1757 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0828 09:51:08.110670    1757 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 09:51:08.114296    1757 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 09:51:08.117515    1757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:08.184128    1757 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0828 09:51:08.195177    1757 start.go:495] detecting cgroup driver to use...
	I0828 09:51:08.195255    1757 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0828 09:51:08.201317    1757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 09:51:08.206799    1757 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 09:51:08.213587    1757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 09:51:08.218869    1757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 09:51:08.224519    1757 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0828 09:51:08.275138    1757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 09:51:08.281787    1757 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 09:51:08.288446    1757 ssh_runner.go:195] Run: which cri-dockerd
	I0828 09:51:08.289805    1757 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0828 09:51:08.293063    1757 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0828 09:51:08.298823    1757 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0828 09:51:08.389129    1757 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0828 09:51:08.464351    1757 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0828 09:51:08.464402    1757 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0828 09:51:08.470786    1757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:08.537544    1757 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0828 09:51:10.727366    1757 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.189847375s)
	I0828 09:51:10.727420    1757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0828 09:51:10.733012    1757 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0828 09:51:10.739687    1757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 09:51:10.745002    1757 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0828 09:51:10.811827    1757 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0828 09:51:10.875385    1757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:10.937447    1757 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0828 09:51:10.943643    1757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 09:51:10.949021    1757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:11.013950    1757 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0828 09:51:11.038797    1757 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0828 09:51:11.038884    1757 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0828 09:51:11.042595    1757 start.go:563] Will wait 60s for crictl version
	I0828 09:51:11.042646    1757 ssh_runner.go:195] Run: which crictl
	I0828 09:51:11.044236    1757 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 09:51:11.064008    1757 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0828 09:51:11.064071    1757 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 09:51:11.075590    1757 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 09:51:11.099008    1757 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0828 09:51:11.099111    1757 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0828 09:51:11.100592    1757 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
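That bash one-liner is a dedupe-and-append: strip any existing host.minikube.internal mapping, re-emit it with the gateway IP 192.168.105.1, stage the result in /tmp/h.$$, and sudo-copy it back over /etc/hosts (the temp-file indirection is needed because the > redirect itself runs unprivileged). Afterwards:

    grep 'host.minikube.internal' /etc/hosts   # expected: 192.168.105.1	host.minikube.internal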
	I0828 09:51:11.104592    1757 kubeadm.go:883] updating cluster {Name:addons-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 09:51:11.104639    1757 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 09:51:11.104681    1757 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 09:51:11.109953    1757 docker.go:685] Got preloaded images: 
	I0828 09:51:11.109961    1757 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0828 09:51:11.109996    1757 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0828 09:51:11.113397    1757 ssh_runner.go:195] Run: which lz4
	I0828 09:51:11.114684    1757 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 09:51:11.115970    1757 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 09:51:11.115982    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322549298 bytes)
	I0828 09:51:12.376765    1757 docker.go:649] duration metric: took 1.262131125s to copy over tarball
	I0828 09:51:12.376831    1757 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 09:51:13.355789    1757 ssh_runner.go:146] rm: /preloaded.tar.lz4
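The preload flow just completed: probe for /preloaded.tar.lz4 in the guest (absent), scp the cached lz4 archive in, untar it over /var so the docker image store is populated, then delete the archive. If reproducing by hand, the archive can be integrity-checked before unpacking (a sketch; assumes the lz4 CLI, which the "which lz4" probe above already located):

    lz4 -t /preloaded.tar.lz4 && echo 'archive OK'
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4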
	I0828 09:51:13.370957    1757 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0828 09:51:13.374775    1757 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0828 09:51:13.380581    1757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:13.452326    1757 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0828 09:51:16.208385    1757 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.756096917s)
	I0828 09:51:16.208484    1757 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 09:51:16.215894    1757 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0828 09:51:16.215904    1757 cache_images.go:84] Images are preloaded, skipping loading
	I0828 09:51:16.215909    1757 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.0 docker true true} ...
	I0828 09:51:16.215993    1757 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-793000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
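The kubelet drop-in follows the same clear-then-set ExecStart pattern as the docker unit earlier: the empty ExecStart= discards any inherited command before the versioned binary under /var/lib/minikube/binaries/v1.31.0/ is launched with node-specific flags. Once the scp of 10-kubeadm.conf lands (a few lines below), it is inspectable with:

    systemctl cat kubelet   # merged view of kubelet.service plus the drop-in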
	I0828 09:51:16.216049    1757 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0828 09:51:16.236384    1757 cni.go:84] Creating CNI manager for ""
	I0828 09:51:16.236395    1757 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 09:51:16.236406    1757 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 09:51:16.236416    1757 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-793000 NodeName:addons-793000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 09:51:16.236481    1757 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-793000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
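The generated file above stitches four API documents together: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A config of this shape can be validated standalone before handing it to kubeadm init (a sketch; kubeadm config validate is upstream kubeadm behavior, not something this log exercises):

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new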
	
	I0828 09:51:16.236538    1757 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 09:51:16.240165    1757 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 09:51:16.240194    1757 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 09:51:16.243684    1757 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0828 09:51:16.249734    1757 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 09:51:16.255365    1757 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0828 09:51:16.261410    1757 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0828 09:51:16.262780    1757 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 09:51:16.267177    1757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:16.329856    1757 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 09:51:16.337100    1757 certs.go:68] Setting up /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000 for IP: 192.168.105.2
	I0828 09:51:16.337113    1757 certs.go:194] generating shared ca certs ...
	I0828 09:51:16.337121    1757 certs.go:226] acquiring lock for ca certs: {Name:mkf861e7f19b199967d33246b8c25f60e0670f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:16.337308    1757 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.key
	I0828 09:51:16.484181    1757 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt ...
	I0828 09:51:16.484194    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt: {Name:mk1834739e78bd8434babbbb18aa8f25be2b66e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:16.484531    1757 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.key ...
	I0828 09:51:16.484535    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.key: {Name:mk1a867b84eaf889d3ac9697eb22b5e369fe8c9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:16.484655    1757 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.key
	I0828 09:51:16.587818    1757 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.crt ...
	I0828 09:51:16.587829    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.crt: {Name:mk465ea46b32e09e2fa0d8518011018a3b6945df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:16.588047    1757 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.key ...
	I0828 09:51:16.588056    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.key: {Name:mk52fd90ddb55196c20a5d4664b48842ce30e24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:16.588193    1757 certs.go:256] generating profile certs ...
	I0828 09:51:16.588225    1757 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.key
	I0828 09:51:16.588234    1757 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt with IP's: []
	I0828 09:51:16.695983    1757 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt ...
	I0828 09:51:16.695996    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: {Name:mkcd9a8add19a3db09f6d9f622b75d22f8f5289e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:16.696194    1757 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.key ...
	I0828 09:51:16.696198    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.key: {Name:mk8cc4801846a0e4d358ff090703366c8e34a0f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:16.696327    1757 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/apiserver.key.9d681d3f
	I0828 09:51:16.696338    1757 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/apiserver.crt.9d681d3f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0828 09:51:16.772229    1757 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/apiserver.crt.9d681d3f ...
	I0828 09:51:16.772238    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/apiserver.crt.9d681d3f: {Name:mkd1f3b692e2b3722c5e4aa159c699b2f93db6d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:16.772387    1757 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/apiserver.key.9d681d3f ...
	I0828 09:51:16.772392    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/apiserver.key.9d681d3f: {Name:mk65eaa420f46fe63bd4453e07ff60e16442f84a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:16.772507    1757 certs.go:381] copying /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/apiserver.crt.9d681d3f -> /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/apiserver.crt
	I0828 09:51:16.772772    1757 certs.go:385] copying /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/apiserver.key.9d681d3f -> /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/apiserver.key
	I0828 09:51:16.772908    1757 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/proxy-client.key
	I0828 09:51:16.772919    1757 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/proxy-client.crt with IP's: []
	I0828 09:51:16.818214    1757 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/proxy-client.crt ...
	I0828 09:51:16.818218    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/proxy-client.crt: {Name:mk81926cfd19fb7b0fd5a4084ac9d2242f2dfcf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:16.818369    1757 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/proxy-client.key ...
	I0828 09:51:16.818373    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/proxy-client.key: {Name:mk9dce34ecbba4009299c46c11194338ce13c0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:16.818668    1757 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 09:51:16.818697    1757 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem (1078 bytes)
	I0828 09:51:16.818717    1757 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem (1123 bytes)
	I0828 09:51:16.818740    1757 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem (1679 bytes)
	I0828 09:51:16.819196    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 09:51:16.828272    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 09:51:16.836713    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 09:51:16.844736    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 09:51:16.852926    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0828 09:51:16.861157    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 09:51:16.869385    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 09:51:16.877722    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 09:51:16.886055    1757 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 09:51:16.894600    1757 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 09:51:16.901422    1757 ssh_runner.go:195] Run: openssl version
	I0828 09:51:16.903702    1757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 09:51:16.907679    1757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 09:51:16.909362    1757 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:51 /usr/share/ca-certificates/minikubeCA.pem
	I0828 09:51:16.909386    1757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 09:51:16.911383    1757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
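
The symlink name b5213941.0 is not arbitrary: OpenSSL looks up trust anchors in /etc/ssl/certs by subject-name hash. A sketch of reproducing the name by hand for the CA installed above:

	# Prints the subject hash (here b5213941), which names the .0 symlink.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
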
	I0828 09:51:16.915450    1757 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 09:51:16.916925    1757 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 09:51:16.916966    1757 kubeadm.go:392] StartCluster: {Name:addons-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 09:51:16.917030    1757 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0828 09:51:16.922747    1757 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 09:51:16.926718    1757 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 09:51:16.930318    1757 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 09:51:16.933769    1757 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 09:51:16.933775    1757 kubeadm.go:157] found existing configuration files:
	
	I0828 09:51:16.933796    1757 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 09:51:16.937013    1757 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 09:51:16.937042    1757 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 09:51:16.940120    1757 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 09:51:16.943469    1757 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 09:51:16.943488    1757 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 09:51:16.947210    1757 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 09:51:16.950722    1757 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 09:51:16.950740    1757 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 09:51:16.954384    1757 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 09:51:16.957738    1757 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 09:51:16.957761    1757 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
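
The four grep/rm pairs above apply one rule per file: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. A compact sketch of the same cleanup:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	done
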
	I0828 09:51:16.961073    1757 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 09:51:16.983167    1757 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 09:51:16.983262    1757 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 09:51:17.032672    1757 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 09:51:17.032729    1757 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 09:51:17.032799    1757 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 09:51:17.040889    1757 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 09:51:17.054092    1757 out.go:235]   - Generating certificates and keys ...
	I0828 09:51:17.054129    1757 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 09:51:17.054193    1757 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 09:51:17.071434    1757 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 09:51:17.160582    1757 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 09:51:17.211593    1757 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 09:51:17.384541    1757 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 09:51:17.485262    1757 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 09:51:17.485324    1757 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-793000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0828 09:51:17.531628    1757 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 09:51:17.531698    1757 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-793000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0828 09:51:17.661843    1757 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 09:51:17.707326    1757 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 09:51:17.816081    1757 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 09:51:17.816126    1757 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 09:51:17.956495    1757 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 09:51:18.232290    1757 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 09:51:18.284932    1757 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 09:51:18.384266    1757 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 09:51:18.517883    1757 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 09:51:18.518125    1757 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 09:51:18.519403    1757 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 09:51:18.526705    1757 out.go:235]   - Booting up control plane ...
	I0828 09:51:18.526766    1757 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 09:51:18.526818    1757 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 09:51:18.526857    1757 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 09:51:18.529950    1757 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 09:51:18.533200    1757 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 09:51:18.533224    1757 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 09:51:18.607888    1757 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 09:51:18.607948    1757 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 09:51:19.117288    1757 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 509.016209ms
	I0828 09:51:19.117449    1757 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 09:51:22.621595    1757 kubeadm.go:310] [api-check] The API server is healthy after 3.504226002s
	I0828 09:51:22.642810    1757 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 09:51:22.652835    1757 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 09:51:22.667652    1757 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 09:51:22.667867    1757 kubeadm.go:310] [mark-control-plane] Marking the node addons-793000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 09:51:22.673803    1757 kubeadm.go:310] [bootstrap-token] Using token: g5bhk0.vxtkzd9vdcjwj7xv
	I0828 09:51:22.680255    1757 out.go:235]   - Configuring RBAC rules ...
	I0828 09:51:22.680336    1757 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 09:51:22.681490    1757 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 09:51:22.688218    1757 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 09:51:22.691568    1757 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 09:51:22.693181    1757 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 09:51:22.694489    1757 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 09:51:23.036603    1757 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 09:51:23.442021    1757 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 09:51:24.043883    1757 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 09:51:24.043923    1757 kubeadm.go:310] 
	I0828 09:51:24.044027    1757 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 09:51:24.044040    1757 kubeadm.go:310] 
	I0828 09:51:24.044219    1757 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 09:51:24.044236    1757 kubeadm.go:310] 
	I0828 09:51:24.044286    1757 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 09:51:24.044408    1757 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 09:51:24.044507    1757 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 09:51:24.044520    1757 kubeadm.go:310] 
	I0828 09:51:24.044644    1757 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 09:51:24.044662    1757 kubeadm.go:310] 
	I0828 09:51:24.044745    1757 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 09:51:24.044754    1757 kubeadm.go:310] 
	I0828 09:51:24.044848    1757 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 09:51:24.045002    1757 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 09:51:24.045145    1757 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 09:51:24.045161    1757 kubeadm.go:310] 
	I0828 09:51:24.045346    1757 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 09:51:24.045537    1757 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 09:51:24.045553    1757 kubeadm.go:310] 
	I0828 09:51:24.045727    1757 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g5bhk0.vxtkzd9vdcjwj7xv \
	I0828 09:51:24.045948    1757 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5b3c4c1f8e59fd4c25ce08db6b17ec7ac98ea4455ff93445c7a91221249d86a1 \
	I0828 09:51:24.046000    1757 kubeadm.go:310] 	--control-plane 
	I0828 09:51:24.046009    1757 kubeadm.go:310] 
	I0828 09:51:24.046195    1757 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 09:51:24.046212    1757 kubeadm.go:310] 
	I0828 09:51:24.046371    1757 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g5bhk0.vxtkzd9vdcjwj7xv \
	I0828 09:51:24.046570    1757 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5b3c4c1f8e59fd4c25ce08db6b17ec7ac98ea4455ff93445c7a91221249d86a1 
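
The --discovery-token-ca-cert-hash printed above can be recomputed from the cluster CA at any time (on this cluster it lives at /var/lib/minikube/certs/ca.crt), using the standard pipeline from the Kubernetes docs; this assumes an RSA CA key, which is what minikube generates:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'
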
	I0828 09:51:24.047512    1757 kubeadm.go:310] W0828 16:51:16.907267    1589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 09:51:24.048129    1757 kubeadm.go:310] W0828 16:51:16.907573    1589 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 09:51:24.048323    1757 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 09:51:24.048347    1757 cni.go:84] Creating CNI manager for ""
	I0828 09:51:24.048386    1757 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 09:51:24.055845    1757 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 09:51:24.058910    1757 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 09:51:24.073790    1757 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
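
For reference, a minimal bridge conflist of the kind written above. The exact contents of minikube's 496-byte 1-k8s.conflist are not shown in this log, so the JSON below is an assumed sketch (bridge plugin with host-local IPAM on the pod subnet, plus portmap), not the file itself:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
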
	I0828 09:51:24.091146    1757 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 09:51:24.091262    1757 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:24.091262    1757 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-793000 minikube.k8s.io/updated_at=2024_08_28T09_51_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=addons-793000 minikube.k8s.io/primary=true
	I0828 09:51:24.169652    1757 ops.go:34] apiserver oom_adj: -16
	I0828 09:51:24.169731    1757 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:24.671857    1757 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:25.172070    1757 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:25.671845    1757 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:26.170168    1757 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:26.671846    1757 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:27.172047    1757 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:27.671991    1757 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:28.171909    1757 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:28.671849    1757 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:28.738224    1757 kubeadm.go:1113] duration metric: took 4.647158792s to wait for elevateKubeSystemPrivileges
	I0828 09:51:28.738243    1757 kubeadm.go:394] duration metric: took 11.821511708s to StartCluster
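
The burst of half-second `kubectl get sa default` calls above is a readiness poll: the cluster-admin binding created earlier targets kube-system's default ServiceAccount, which the controller-manager only creates once it is running. The same wait as a sketch:

	until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	done
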
	I0828 09:51:28.738253    1757 settings.go:142] acquiring lock: {Name:mk584f5f183a19e050e7184c0c9e70ea26430337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:28.738423    1757 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 09:51:28.738635    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/kubeconfig: {Name:mke8b729c65a2ae9e4d9042dc78e2127479f8609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:28.738900    1757 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0828 09:51:28.738930    1757 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 09:51:28.738995    1757 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0828 09:51:28.739056    1757 addons.go:69] Setting yakd=true in profile "addons-793000"
	I0828 09:51:28.739065    1757 addons.go:234] Setting addon yakd=true in "addons-793000"
	I0828 09:51:28.739064    1757 addons.go:69] Setting inspektor-gadget=true in profile "addons-793000"
	I0828 09:51:28.739075    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:28.739086    1757 addons.go:234] Setting addon inspektor-gadget=true in "addons-793000"
	I0828 09:51:28.739100    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:28.739111    1757 addons.go:69] Setting storage-provisioner=true in profile "addons-793000"
	I0828 09:51:28.739121    1757 addons.go:69] Setting cloud-spanner=true in profile "addons-793000"
	I0828 09:51:28.739128    1757 addons.go:234] Setting addon storage-provisioner=true in "addons-793000"
	I0828 09:51:28.739135    1757 addons.go:234] Setting addon cloud-spanner=true in "addons-793000"
	I0828 09:51:28.739142    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:28.739147    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:28.739176    1757 addons.go:69] Setting volcano=true in profile "addons-793000"
	I0828 09:51:28.739210    1757 addons.go:234] Setting addon volcano=true in "addons-793000"
	I0828 09:51:28.739221    1757 config.go:182] Loaded profile config "addons-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 09:51:28.739238    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:28.739330    1757 addons.go:69] Setting volumesnapshots=true in profile "addons-793000"
	I0828 09:51:28.739354    1757 addons.go:234] Setting addon volumesnapshots=true in "addons-793000"
	I0828 09:51:28.739365    1757 addons.go:69] Setting gcp-auth=true in profile "addons-793000"
	I0828 09:51:28.739377    1757 mustload.go:65] Loading cluster: addons-793000
	I0828 09:51:28.739384    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:28.739395    1757 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-793000"
	I0828 09:51:28.739406    1757 retry.go:31] will retry after 1.198528842s: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.739109    1757 addons.go:69] Setting default-storageclass=true in profile "addons-793000"
	I0828 09:51:28.739434    1757 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-793000"
	I0828 09:51:28.739512    1757 retry.go:31] will retry after 1.444729036s: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.739517    1757 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-793000"
	I0828 09:51:28.739528    1757 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-793000"
	I0828 09:51:28.739535    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:28.739546    1757 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-793000"
	I0828 09:51:28.739554    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:28.739552    1757 retry.go:31] will retry after 670.111523ms: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.739560    1757 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-793000"
	I0828 09:51:28.739566    1757 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-793000"
	I0828 09:51:28.739649    1757 retry.go:31] will retry after 1.393251403s: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.739655    1757 addons.go:69] Setting registry=true in profile "addons-793000"
	I0828 09:51:28.739660    1757 addons.go:234] Setting addon registry=true in "addons-793000"
	I0828 09:51:28.739667    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:28.739699    1757 config.go:182] Loaded profile config "addons-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 09:51:28.739739    1757 retry.go:31] will retry after 920.538049ms: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.739819    1757 retry.go:31] will retry after 1.329772771s: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.739824    1757 addons.go:69] Setting ingress=true in profile "addons-793000"
	I0828 09:51:28.739830    1757 addons.go:234] Setting addon ingress=true in "addons-793000"
	I0828 09:51:28.739840    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:28.739873    1757 addons.go:69] Setting ingress-dns=true in profile "addons-793000"
	I0828 09:51:28.739886    1757 addons.go:69] Setting metrics-server=true in profile "addons-793000"
	I0828 09:51:28.739892    1757 addons.go:234] Setting addon metrics-server=true in "addons-793000"
	I0828 09:51:28.739898    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:28.739898    1757 addons.go:234] Setting addon ingress-dns=true in "addons-793000"
	I0828 09:51:28.739903    1757 retry.go:31] will retry after 606.686327ms: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.739912    1757 retry.go:31] will retry after 672.623681ms: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.739914    1757 retry.go:31] will retry after 654.451578ms: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.739928    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:28.740029    1757 retry.go:31] will retry after 654.20305ms: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.739880    1757 retry.go:31] will retry after 849.856712ms: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.740084    1757 retry.go:31] will retry after 1.238022374s: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.740240    1757 retry.go:31] will retry after 1.056599901s: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:28.742051    1757 out.go:177] * Verifying Kubernetes components...
	I0828 09:51:28.751759    1757 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0828 09:51:28.755950    1757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:28.759950    1757 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0828 09:51:28.763954    1757 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0828 09:51:28.763966    1757 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0828 09:51:28.763978    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:28.766930    1757 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0828 09:51:28.766940    1757 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0828 09:51:28.766947    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:28.797234    1757 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
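
The sed pipeline above edits the live CoreDNS ConfigMap so host.minikube.internal resolves to the host from inside the cluster; judging by the sed expressions, the Corefile gains roughly the block shown in the comments below (assumed layout). A way to verify the edit with the same kubeconfig:

	# Expected addition to the Corefile, roughly:
	#     hosts {
	#        192.168.105.1 host.minikube.internal
	#        fallthrough
	#     }
	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
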
	I0828 09:51:28.872003    1757 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 09:51:28.921290    1757 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0828 09:51:28.921302    1757 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0828 09:51:28.928148    1757 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0828 09:51:28.928161    1757 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0828 09:51:28.935447    1757 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0828 09:51:28.935459    1757 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0828 09:51:28.945167    1757 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0828 09:51:28.945178    1757 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0828 09:51:28.946572    1757 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0828 09:51:28.946583    1757 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0828 09:51:28.953384    1757 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0828 09:51:28.953397    1757 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0828 09:51:28.963685    1757 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0828 09:51:28.963694    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0828 09:51:28.969628    1757 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0828 09:51:28.969642    1757 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0828 09:51:28.975761    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0828 09:51:28.981940    1757 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0828 09:51:28.981953    1757 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0828 09:51:28.994220    1757 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0828 09:51:28.994235    1757 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0828 09:51:29.004850    1757 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 09:51:29.004859    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0828 09:51:29.013020    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 09:51:29.018585    1757 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0828 09:51:29.020026    1757 node_ready.go:35] waiting up to 6m0s for node "addons-793000" to be "Ready" ...
	I0828 09:51:29.026743    1757 node_ready.go:49] node "addons-793000" has status "Ready":"True"
	I0828 09:51:29.026761    1757 node_ready.go:38] duration metric: took 6.714667ms for node "addons-793000" to be "Ready" ...
	I0828 09:51:29.026776    1757 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 09:51:29.031732    1757 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-793000" in "kube-system" namespace to be "Ready" ...
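
The node and pod readiness checks above can be reproduced by hand with kubectl wait; a sketch, assuming a kubeconfig for this cluster is active:

	kubectl wait --for=condition=Ready node/addons-793000 --timeout=6m
	kubectl wait --for=condition=Ready pod -n kube-system -l component=etcd --timeout=6m
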
	I0828 09:51:29.353941    1757 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0828 09:51:29.357935    1757 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0828 09:51:29.365851    1757 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0828 09:51:29.373894    1757 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0828 09:51:29.381911    1757 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0828 09:51:29.388919    1757 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0828 09:51:29.396354    1757 retry.go:31] will retry after 1.123251713s: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/monitor: connect: connection refused
	I0828 09:51:29.397933    1757 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0828 09:51:29.401937    1757 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 09:51:29.410929    1757 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0828 09:51:29.414020    1757 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-793000"
	I0828 09:51:29.414038    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:29.414830    1757 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 09:51:29.414837    1757 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0828 09:51:29.416388    1757 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0828 09:51:29.419395    1757 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0828 09:51:29.419403    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:29.418938    1757 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0828 09:51:29.427889    1757 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0828 09:51:29.427924    1757 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0828 09:51:29.430947    1757 out.go:177]   - Using image docker.io/busybox:stable
	I0828 09:51:29.437132    1757 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 09:51:29.437141    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0828 09:51:29.437151    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:29.442977    1757 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0828 09:51:29.443045    1757 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 09:51:29.443109    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0828 09:51:29.443126    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:29.449523    1757 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0828 09:51:29.449530    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0828 09:51:29.449540    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:29.449815    1757 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0828 09:51:29.449822    1757 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0828 09:51:29.471946    1757 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-793000 service yakd-dashboard -n yakd-dashboard
	
	I0828 09:51:29.472530    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 09:51:29.493164    1757 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0828 09:51:29.493177    1757 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0828 09:51:29.524441    1757 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-793000" context rescaled to 1 replicas
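
"rescaled to 1 replicas" above: the stock CoreDNS Deployment ships with two replicas, more than a single-node cluster needs. The equivalent manual command:

	kubectl -n kube-system scale deployment coredns --replicas=1
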
	I0828 09:51:29.527988    1757 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0828 09:51:29.527998    1757 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0828 09:51:29.545722    1757 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0828 09:51:29.545737    1757 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0828 09:51:29.550844    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 09:51:29.577753    1757 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0828 09:51:29.577765    1757 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0828 09:51:29.588808    1757 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0828 09:51:29.588819    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0828 09:51:29.592953    1757 out.go:177]   - Using image docker.io/registry:2.8.3
	I0828 09:51:29.594235    1757 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0828 09:51:29.596934    1757 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0828 09:51:29.596943    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0828 09:51:29.596954    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:29.597243    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0828 09:51:29.633570    1757 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0828 09:51:29.633584    1757 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0828 09:51:29.665995    1757 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0828 09:51:29.669927    1757 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0828 09:51:29.669937    1757 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0828 09:51:29.669950    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:29.710734    1757 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0828 09:51:29.710744    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0828 09:51:29.743692    1757 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0828 09:51:29.743705    1757 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0828 09:51:29.773749    1757 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0828 09:51:29.773761    1757 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0828 09:51:29.803959    1757 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0828 09:51:29.809986    1757 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 09:51:29.809994    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0828 09:51:29.810005    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:29.810292    1757 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0828 09:51:29.810296    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0828 09:51:29.824345    1757 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0828 09:51:29.824356    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0828 09:51:29.851468    1757 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 09:51:29.851480    1757 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0828 09:51:29.861480    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 09:51:29.863194    1757 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0828 09:51:29.863201    1757 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0828 09:51:29.880085    1757 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0828 09:51:29.880099    1757 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0828 09:51:29.928319    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0828 09:51:29.944939    1757 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 09:51:29.948933    1757 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 09:51:29.948940    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 09:51:29.948950    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:29.982929    1757 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0828 09:51:29.983557    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 09:51:29.984040    1757 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 09:51:29.984046    1757 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 09:51:29.984053    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:30.007417    1757 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0828 09:51:30.007436    1757 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0828 09:51:30.021064    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 09:51:30.070177    1757 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 09:51:30.070188    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0828 09:51:30.070520    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:30.111665    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 09:51:30.122521    1757 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 09:51:30.122534    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0828 09:51:30.135774    1757 addons.go:234] Setting addon default-storageclass=true in "addons-793000"
	I0828 09:51:30.135793    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:30.136342    1757 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 09:51:30.136349    1757 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 09:51:30.136354    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:30.182129    1757 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 09:51:30.182140    1757 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 09:51:30.188580    1757 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0828 09:51:30.192480    1757 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0828 09:51:30.192491    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0828 09:51:30.192501    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:30.257791    1757 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 09:51:30.257803    1757 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 09:51:30.338756    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 09:51:30.347688    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 09:51:30.393799    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0828 09:51:30.525854    1757 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0828 09:51:30.530061    1757 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 09:51:30.530070    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0828 09:51:30.530081    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:30.802548    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 09:51:31.048376    1757 pod_ready.go:103] pod "etcd-addons-793000" in "kube-system" namespace has status "Ready":"False"
	I0828 09:51:31.819696    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.268878125s)
	I0828 09:51:31.819805    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.347312583s)
	I0828 09:51:31.819827    1757 addons.go:475] Verifying addon ingress=true in "addons-793000"
	I0828 09:51:31.824021    1757 out.go:177] * Verifying ingress addon...
	I0828 09:51:31.833433    1757 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0828 09:51:31.870865    1757 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0828 09:51:31.870874    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
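
The kapi.go lines are a poll loop: list pods by label selector, then keep waiting while any of them is still Pending (the "Pending: [<nil>]" suffix is the waiter's formatted state before a pod reports a failure reason). Roughly equivalent with client-go; waitForLabel and the 500ms interval are ours, not minikube's exact code:

    package kapi

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForLabel polls until every pod matching selector in ns is Running,
    // mirroring the waiter behind the repeated kapi.go:96 lines.
    func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                running := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        running = false
                        break
                    }
                }
                if running {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pods %q in ns %q not Running within %v", selector, ns, timeout)
    }
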
	I0828 09:51:32.344458    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:32.870043    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:33.048563    1757 pod_ready.go:103] pod "etcd-addons-793000" in "kube-system" namespace has status "Ready":"False"
	I0828 09:51:33.208319    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.611133917s)
	I0828 09:51:33.364556    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:33.401741    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.540311791s)
	I0828 09:51:33.401751    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.47348175s)
	I0828 09:51:33.401761    1757 addons.go:475] Verifying addon registry=true in "addons-793000"
	I0828 09:51:33.401761    1757 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-793000"
	I0828 09:51:33.401801    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.418303416s)
	I0828 09:51:33.401822    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.380817459s)
	I0828 09:51:33.401860    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.290242458s)
	W0828 09:51:33.402489    1757 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0828 09:51:33.401873    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.063169042s)
	I0828 09:51:33.401903    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.054265833s)
	I0828 09:51:33.402594    1757 addons.go:475] Verifying addon metrics-server=true in "addons-793000"
	I0828 09:51:33.401911    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.008161167s)
	I0828 09:51:33.401917    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.599408084s)
	I0828 09:51:33.402502    1757 retry.go:31] will retry after 331.841317ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
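
This failure is an ordering race, not a broken manifest: the batch creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass instance in the same kubectl apply, and the client's discovery has no mapping for the brand-new kind yet, hence "no matches for kind ... ensure CRDs are installed first". The addon code treats it as retryable (retry.go:31 above schedules a ~332ms wait) and re-applies with --force at 09:51:33.735767 below, which succeeds. The shape of that retry, sketched in Go; the error match and doubling backoff are illustrative, not minikube's exact schedule:

    package main

    import (
        "errors"
        "fmt"
        "strings"
        "time"
    )

    // retryApply re-runs apply while the error looks like the CRD discovery
    // race seen in the log, backing off between attempts.
    func retryApply(apply func() error, attempts int) error {
        delay := 300 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            if !strings.Contains(err.Error(), "no matches for kind") {
                return err // a real failure, not the discovery race
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }

    func main() {
        calls := 0
        apply := func() error {
            calls++
            if calls == 1 { // first attempt races the freshly created CRDs
                return errors.New(`no matches for kind "VolumeSnapshotClass"`)
            }
            return nil
        }
        if err := retryApply(apply, 5); err != nil {
            fmt.Println(err)
        }
    }
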
	I0828 09:51:33.405044    1757 out.go:177] * Verifying registry addon...
	I0828 09:51:33.412957    1757 out.go:177] * Verifying csi-hostpath-driver addon...
	I0828 09:51:33.423415    1757 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0828 09:51:33.429404    1757 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0828 09:51:33.470068    1757 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0828 09:51:33.470078    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:33.470248    1757 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0828 09:51:33.470253    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:33.735767    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 09:51:33.837933    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:33.937677    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:33.937908    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:34.337023    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:34.437692    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:34.438143    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:34.837698    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:34.937324    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:34.937925    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:35.337761    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:35.427214    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:35.431748    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:35.536263    1757 pod_ready.go:103] pod "etcd-addons-793000" in "kube-system" namespace has status "Ready":"False"
	I0828 09:51:35.837928    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:35.940450    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:35.940716    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:36.079946    1757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.344198708s)
	I0828 09:51:36.338139    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:36.427596    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:36.432737    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:36.538213    1757 pod_ready.go:93] pod "etcd-addons-793000" in "kube-system" namespace has status "Ready":"True"
	I0828 09:51:36.538233    1757 pod_ready.go:82] duration metric: took 7.506635166s for pod "etcd-addons-793000" in "kube-system" namespace to be "Ready" ...
	I0828 09:51:36.538244    1757 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-793000" in "kube-system" namespace to be "Ready" ...
	I0828 09:51:36.542454    1757 pod_ready.go:93] pod "kube-apiserver-addons-793000" in "kube-system" namespace has status "Ready":"True"
	I0828 09:51:36.542464    1757 pod_ready.go:82] duration metric: took 4.213709ms for pod "kube-apiserver-addons-793000" in "kube-system" namespace to be "Ready" ...
	I0828 09:51:36.542471    1757 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-793000" in "kube-system" namespace to be "Ready" ...
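
The pod_ready.go checks read the PodReady condition off each pod's status; "Ready":"True" in the log is that condition's value. The same test with the k8s.io/api types (isPodReady is our name for it):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the PodReady condition is True, the value
    // behind the log's `has status "Ready":"True"`.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{}
        pod.Status.Conditions = []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }
        fmt.Println(isPodReady(pod)) // true
    }
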
	I0828 09:51:36.840153    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:36.938401    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:36.938743    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:37.337425    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:37.427465    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:37.432270    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:37.838160    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:37.938305    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:37.938563    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:38.277110    1757 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0828 09:51:38.277128    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:38.306755    1757 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0828 09:51:38.315789    1757 addons.go:234] Setting addon gcp-auth=true in "addons-793000"
	I0828 09:51:38.315815    1757 host.go:66] Checking if "addons-793000" exists ...
	I0828 09:51:38.316506    1757 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0828 09:51:38.316518    1757 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/addons-793000/id_rsa Username:docker}
	I0828 09:51:38.337506    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:38.342940    1757 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 09:51:38.346948    1757 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0828 09:51:38.352928    1757 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0828 09:51:38.352936    1757 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0828 09:51:38.359508    1757 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0828 09:51:38.359515    1757 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0828 09:51:38.367429    1757 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 09:51:38.367439    1757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0828 09:51:38.374131    1757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
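
The three manifests just applied are what makes gcp-auth work: a namespace, a service, and a mutating-webhook deployment that injects GOOGLE_APPLICATION_CREDENTIALS (plus the project name copied to /var/lib/minikube/google_cloud_project) into pods at admission time. That is a summary of the addon's documented behavior, not something this log asserts.
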
	I0828 09:51:38.426265    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:38.432456    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:38.551374    1757 pod_ready.go:103] pod "kube-controller-manager-addons-793000" in "kube-system" namespace has status "Ready":"False"
	I0828 09:51:38.639156    1757 addons.go:475] Verifying addon gcp-auth=true in "addons-793000"
	I0828 09:51:38.642772    1757 out.go:177] * Verifying gcp-auth addon...
	I0828 09:51:38.650000    1757 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0828 09:51:38.651264    1757 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0828 09:51:38.939092    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:38.939238    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:38.939391    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:39.046658    1757 pod_ready.go:93] pod "kube-controller-manager-addons-793000" in "kube-system" namespace has status "Ready":"True"
	I0828 09:51:39.046671    1757 pod_ready.go:82] duration metric: took 2.504244208s for pod "kube-controller-manager-addons-793000" in "kube-system" namespace to be "Ready" ...
	I0828 09:51:39.046675    1757 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gdrtb" in "kube-system" namespace to be "Ready" ...
	I0828 09:51:39.048473    1757 pod_ready.go:93] pod "kube-proxy-gdrtb" in "kube-system" namespace has status "Ready":"True"
	I0828 09:51:39.048481    1757 pod_ready.go:82] duration metric: took 1.803459ms for pod "kube-proxy-gdrtb" in "kube-system" namespace to be "Ready" ...
	I0828 09:51:39.048485    1757 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-793000" in "kube-system" namespace to be "Ready" ...
	I0828 09:51:39.050051    1757 pod_ready.go:93] pod "kube-scheduler-addons-793000" in "kube-system" namespace has status "Ready":"True"
	I0828 09:51:39.050056    1757 pod_ready.go:82] duration metric: took 1.567833ms for pod "kube-scheduler-addons-793000" in "kube-system" namespace to be "Ready" ...
	I0828 09:51:39.050059    1757 pod_ready.go:39] duration metric: took 10.023475459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 09:51:39.050069    1757 api_server.go:52] waiting for apiserver process to appear ...
	I0828 09:51:39.050117    1757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 09:51:39.056571    1757 api_server.go:72] duration metric: took 10.317833916s to wait for apiserver process to appear ...
	I0828 09:51:39.056581    1757 api_server.go:88] waiting for apiserver healthz status ...
	I0828 09:51:39.056587    1757 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0828 09:51:39.058961    1757 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0828 09:51:39.059424    1757 api_server.go:141] control plane version: v1.31.0
	I0828 09:51:39.059433    1757 api_server.go:131] duration metric: took 2.850084ms to wait for apiserver health ...
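
The healthz wait at api_server.go:253/279 is a plain HTTPS GET that expects a 200 with body "ok". A minimal sketch reusing the logged endpoint; TLS verification is skipped only because the target is a throwaway test VM whose apiserver certificate is cluster-local:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz polls url until it answers 200 "ok" or the timeout passes.
    func checkHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // never do this outside a disposable test cluster
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz at %s not ok within %v", url, timeout)
    }

    func main() {
        if err := checkHealthz("https://192.168.105.2:8443/healthz", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }
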
	I0828 09:51:39.059436    1757 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 09:51:39.064162    1757 system_pods.go:59] 17 kube-system pods found
	I0828 09:51:39.064174    1757 system_pods.go:61] "coredns-6f6b679f8f-9d5s6" [db50f136-e800-400f-9eb4-b67d8238f6fb] Running
	I0828 09:51:39.064178    1757 system_pods.go:61] "csi-hostpath-attacher-0" [6e1c8fea-eb7d-4620-a9bf-2c61e88dc4ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 09:51:39.064181    1757 system_pods.go:61] "csi-hostpath-resizer-0" [b63220b6-d781-4e28-99f3-5c501293dfe8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 09:51:39.064184    1757 system_pods.go:61] "csi-hostpathplugin-7f4r5" [dcba93b5-8ee1-4e7c-b967-748ed4174a96] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 09:51:39.064186    1757 system_pods.go:61] "etcd-addons-793000" [e359be7d-e6cb-48ab-8fe3-04baa643f2be] Running
	I0828 09:51:39.064189    1757 system_pods.go:61] "kube-apiserver-addons-793000" [0a804f71-9e4d-4f4f-b022-1705b7310de6] Running
	I0828 09:51:39.064191    1757 system_pods.go:61] "kube-controller-manager-addons-793000" [3bb957e6-ee01-4281-97b6-ac29c442de61] Running
	I0828 09:51:39.064195    1757 system_pods.go:61] "kube-ingress-dns-minikube" [e5e5c8d8-acf1-4924-a09b-25c7422f25c9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0828 09:51:39.064197    1757 system_pods.go:61] "kube-proxy-gdrtb" [e5e8fd9d-5b49-4a93-b693-de0b8713cef5] Running
	I0828 09:51:39.064199    1757 system_pods.go:61] "kube-scheduler-addons-793000" [c3a9cc2a-f01c-427b-94e9-6a565a0eff62] Running
	I0828 09:51:39.064202    1757 system_pods.go:61] "metrics-server-84c5f94fbc-4bnvk" [7c0760bb-1ad8-4130-b346-cd4764ad0de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 09:51:39.064205    1757 system_pods.go:61] "nvidia-device-plugin-daemonset-lqcnw" [8df0fa9c-1783-42d0-bf14-de8210a636d7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0828 09:51:39.064209    1757 system_pods.go:61] "registry-6fb4cdfc84-42k6c" [3738eb26-8c6c-4525-be41-2bb099331da6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0828 09:51:39.064211    1757 system_pods.go:61] "registry-proxy-2p694" [c792a3f0-ab28-4331-bc5f-776dcca7e356] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 09:51:39.064214    1757 system_pods.go:61] "snapshot-controller-56fcc65765-b6szp" [5096ca9b-79d5-4b14-879a-fcb526d332b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 09:51:39.064217    1757 system_pods.go:61] "snapshot-controller-56fcc65765-r82f2" [cb2c7dde-c4cc-499c-958d-82eabf6ffbdc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 09:51:39.064220    1757 system_pods.go:61] "storage-provisioner" [e92f4c88-61d7-491e-b68e-295b8475139b] Running
	I0828 09:51:39.064223    1757 system_pods.go:74] duration metric: took 4.784667ms to wait for pod list to return data ...
	I0828 09:51:39.064226    1757 default_sa.go:34] waiting for default service account to be created ...
	I0828 09:51:39.065170    1757 default_sa.go:45] found service account: "default"
	I0828 09:51:39.065175    1757 default_sa.go:55] duration metric: took 946.084µs for default service account to be created ...
	I0828 09:51:39.065178    1757 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 09:51:39.140426    1757 system_pods.go:86] 17 kube-system pods found
	I0828 09:51:39.140439    1757 system_pods.go:89] "coredns-6f6b679f8f-9d5s6" [db50f136-e800-400f-9eb4-b67d8238f6fb] Running
	I0828 09:51:39.140443    1757 system_pods.go:89] "csi-hostpath-attacher-0" [6e1c8fea-eb7d-4620-a9bf-2c61e88dc4ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 09:51:39.140446    1757 system_pods.go:89] "csi-hostpath-resizer-0" [b63220b6-d781-4e28-99f3-5c501293dfe8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 09:51:39.140450    1757 system_pods.go:89] "csi-hostpathplugin-7f4r5" [dcba93b5-8ee1-4e7c-b967-748ed4174a96] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 09:51:39.140452    1757 system_pods.go:89] "etcd-addons-793000" [e359be7d-e6cb-48ab-8fe3-04baa643f2be] Running
	I0828 09:51:39.140455    1757 system_pods.go:89] "kube-apiserver-addons-793000" [0a804f71-9e4d-4f4f-b022-1705b7310de6] Running
	I0828 09:51:39.140457    1757 system_pods.go:89] "kube-controller-manager-addons-793000" [3bb957e6-ee01-4281-97b6-ac29c442de61] Running
	I0828 09:51:39.140460    1757 system_pods.go:89] "kube-ingress-dns-minikube" [e5e5c8d8-acf1-4924-a09b-25c7422f25c9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0828 09:51:39.140461    1757 system_pods.go:89] "kube-proxy-gdrtb" [e5e8fd9d-5b49-4a93-b693-de0b8713cef5] Running
	I0828 09:51:39.140464    1757 system_pods.go:89] "kube-scheduler-addons-793000" [c3a9cc2a-f01c-427b-94e9-6a565a0eff62] Running
	I0828 09:51:39.140467    1757 system_pods.go:89] "metrics-server-84c5f94fbc-4bnvk" [7c0760bb-1ad8-4130-b346-cd4764ad0de8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 09:51:39.140470    1757 system_pods.go:89] "nvidia-device-plugin-daemonset-lqcnw" [8df0fa9c-1783-42d0-bf14-de8210a636d7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0828 09:51:39.140473    1757 system_pods.go:89] "registry-6fb4cdfc84-42k6c" [3738eb26-8c6c-4525-be41-2bb099331da6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0828 09:51:39.140477    1757 system_pods.go:89] "registry-proxy-2p694" [c792a3f0-ab28-4331-bc5f-776dcca7e356] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 09:51:39.140480    1757 system_pods.go:89] "snapshot-controller-56fcc65765-b6szp" [5096ca9b-79d5-4b14-879a-fcb526d332b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 09:51:39.140483    1757 system_pods.go:89] "snapshot-controller-56fcc65765-r82f2" [cb2c7dde-c4cc-499c-958d-82eabf6ffbdc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 09:51:39.140485    1757 system_pods.go:89] "storage-provisioner" [e92f4c88-61d7-491e-b68e-295b8475139b] Running
	I0828 09:51:39.140489    1757 system_pods.go:126] duration metric: took 75.286375ms to wait for k8s-apps to be running ...
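
The 17-pod inventory (printed at 09:51:39.064 and again at 09:51:39.140) is a plain pod list over kube-system. Roughly the same with client-go, assuming the kubeconfig path the guest uses; on the host you would point at the profile's kubeconfig instead:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // kubeconfig path as seen inside the guest (assumption for this sketch)
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // matches the log's `"name" [uid] Phase` shape
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }
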
	I0828 09:51:39.140493    1757 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 09:51:39.140556    1757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 09:51:39.146949    1757 system_svc.go:56] duration metric: took 6.453625ms WaitForService to wait for kubelet
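
system_svc.go's kubelet probe is just the exit status of the systemctl command logged above; the same call from Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // is-active exits 0 only when the unit is active; --quiet suppresses output
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet not active:", err)
            return
        }
        fmt.Println("kubelet active")
    }
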
	I0828 09:51:39.146958    1757 kubeadm.go:582] duration metric: took 10.408222125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 09:51:39.146967    1757 node_conditions.go:102] verifying NodePressure condition ...
	I0828 09:51:39.337546    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:39.337697    1757 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 09:51:39.337705    1757 node_conditions.go:123] node cpu capacity is 2
	I0828 09:51:39.337712    1757 node_conditions.go:105] duration metric: took 190.745417ms to run NodePressure ...
	I0828 09:51:39.337719    1757 start.go:241] waiting for startup goroutines ...
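
node_conditions.go reads the two capacity figures logged just above (ephemeral storage 17734596Ki, 2 CPUs) from Node.Status.Capacity; a client-go sketch under the same kubeconfig assumption as before:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            // e.g. "node storage ephemeral capacity is 17734596Ki", "node cpu capacity is 2"
            fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
        }
    }
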
	I0828 09:51:39.438566    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:39.438827    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:39.837900    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:39.927826    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:40.029222    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:40.339259    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:40.441079    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:40.442685    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:40.837362    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:40.926954    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:40.931756    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:41.337328    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:41.426054    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:41.432845    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:41.837330    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:41.937379    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:41.937588    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:42.337297    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:42.425610    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:42.431943    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:42.838059    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:42.938137    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:42.938724    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:43.337600    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:43.426726    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:43.431595    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:43.839892    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:43.929381    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:43.933735    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:44.339616    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:44.428511    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:44.433019    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:44.844322    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:44.929003    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:44.933825    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:45.337339    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:45.427278    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:45.432042    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:45.835569    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:45.927158    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:45.931880    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:46.337401    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:46.427161    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:46.431644    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:46.837499    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:46.926922    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:46.932926    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:47.337447    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:47.426960    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:47.431439    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:47.837332    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:47.926790    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:47.931538    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:48.596811    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:48.596875    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:48.596879    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:48.837327    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:48.937638    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:51:48.938036    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:49.338838    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:49.427784    1757 kapi.go:107] duration metric: took 16.004678167s to wait for kubernetes.io/minikube-addons=registry ...
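
The registry waiter exits here after 16s; from this point only the ingress-nginx and csi-hostpath-driver waiters keep polling. The manual equivalent of the check that just passed is kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry.
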
	I0828 09:51:49.432464    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:49.842895    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:49.936804    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:50.339613    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:50.438973    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:50.837153    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:50.933396    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:51.337072    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:51.433425    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:51.837385    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:51.933355    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:52.337068    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:52.433400    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:52.836983    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:52.931901    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:53.337229    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:53.433177    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:53.837254    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:53.933473    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:54.337500    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:54.438242    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:54.837304    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:54.933423    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:55.337004    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:55.433313    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:55.837446    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:55.934924    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:56.335783    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:56.433401    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:56.837273    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:56.933303    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:57.336400    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:57.432584    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:57.835864    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:57.933214    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:58.335318    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:58.433192    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:58.837126    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:58.933439    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:59.337166    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:59.432496    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:51:59.836973    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:51:59.933268    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:00.337114    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:00.499306    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:00.838232    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:00.937419    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:01.339786    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:01.435776    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:01.840317    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:01.937377    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:02.337052    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:02.432099    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:02.838053    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:02.934084    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:03.337119    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:03.433707    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:03.836725    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:03.933123    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	... [kapi.go:96 poll pairs for "app.kubernetes.io/name=ingress-nginx" and "kubernetes.io/minikube-addons=csi-hostpath-driver" repeat at ~500ms intervals, both still Pending, from 09:52:04 through 09:52:49] ...
	I0828 09:52:49.339578    1757 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:49.434222    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:49.836473    1757 kapi.go:107] duration metric: took 1m18.004577209s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0828 09:52:49.934042    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	... [kapi.go:96 polls for "kubernetes.io/minikube-addons=csi-hostpath-driver" repeat at ~500ms intervals, still Pending, from 09:52:50 through 09:52:59] ...
	I0828 09:53:00.432405    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:00.650128    1757 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0828 09:53:00.650139    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:00.931582    1757 kapi.go:107] duration metric: took 1m27.503902042s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0828 09:53:01.152662    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:01.653328    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	... [kapi.go:96 polls for "kubernetes.io/minikube-addons=gcp-auth" repeat at ~500ms intervals, still Pending, from 09:53:02 through 09:54:08] ...
	I0828 09:54:09.151081    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:09.650949    1757 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:10.150513    1757 kapi.go:107] duration metric: took 2m31.503497875s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0828 09:54:10.154518    1757 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-793000 cluster.
	I0828 09:54:10.157579    1757 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0828 09:54:10.162428    1757 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0828 09:54:10.165530    1757 out.go:177] * Enabled addons: inspektor-gadget, yakd, storage-provisioner-rancher, volcano, ingress-dns, storage-provisioner, metrics-server, cloud-spanner, nvidia-device-plugin, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0828 09:54:10.169459    1757 addons.go:510] duration metric: took 2m41.433684833s for enable addons: enabled=[inspektor-gadget yakd storage-provisioner-rancher volcano ingress-dns storage-provisioner metrics-server cloud-spanner nvidia-device-plugin default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
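
Note: the gcp-auth messages above describe an opt-out label that the addon's webhook checks before mounting credentials into a new pod. A minimal sketch of a pod manifest carrying that label; the pod name and image are illustrative placeholders, not objects from this run:

    apiVersion: v1
    kind: Pod
    metadata:
      name: skip-gcp-auth-example        # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"     # the key named in the log output above
    spec:
      containers:
      - name: app
        image: nginx                     # placeholder image

Because the label is evaluated at pod creation time, it belongs in the pod configuration itself, as the log message says, rather than being applied to an already-running pod.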
	I0828 09:54:10.169479    1757 start.go:246] waiting for cluster config update ...
	I0828 09:54:10.169488    1757 start.go:255] writing updated cluster config ...
	I0828 09:54:10.170092    1757 ssh_runner.go:195] Run: rm -f paused
	I0828 09:54:10.324404    1757 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0828 09:54:10.328511    1757 out.go:201] 
	W0828 09:54:10.332469    1757 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0828 09:54:10.336503    1757 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0828 09:54:10.348491    1757 out.go:177] * Done! kubectl is now configured to use "addons-793000" cluster and "default" namespace by default
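
Note: the W-level line above flags a client/server minor-version skew (kubectl 1.29.2 against Kubernetes 1.31.0). A minimal sketch of the workaround the log itself suggests, which runs minikube's bundled kubectl at the cluster-matched version:

    # run the cluster-matched kubectl bundled with minikube (v1.31.0 here)
    minikube kubectl -- get pods -A
    # any kubectl arguments can follow the "--" separator
    minikube kubectl -- version --client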
	
	
	==> Docker <==
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.250405379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.250421780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.250514233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.266140273Z" level=info msg="shim disconnected" id=dc6139e8541c28826c6a2eb28b48791f614dd0d857e68efa0907ebef4af47e59 namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.266239553Z" level=warning msg="cleaning up after shim disconnected" id=dc6139e8541c28826c6a2eb28b48791f614dd0d857e68efa0907ebef4af47e59 namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.266256703Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1279]: time="2024-08-28T17:04:00.266251666Z" level=info msg="ignoring event" container=dc6139e8541c28826c6a2eb28b48791f614dd0d857e68efa0907ebef4af47e59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:00 addons-793000 cri-dockerd[1175]: time="2024-08-28T17:04:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2786b461ac88edd8772b4b8d58d81f9025db4faf98ed4d9c771be00f77bc99be/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 28 17:04:00 addons-793000 dockerd[1279]: time="2024-08-28T17:04:00.415280362Z" level=info msg="ignoring event" container=6ba5dfed56857308e614cc1032e38087084559b2a2b623dff8ec11c524b2e8a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.415813226Z" level=info msg="shim disconnected" id=6ba5dfed56857308e614cc1032e38087084559b2a2b623dff8ec11c524b2e8a1 namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.416043756Z" level=warning msg="cleaning up after shim disconnected" id=6ba5dfed56857308e614cc1032e38087084559b2a2b623dff8ec11c524b2e8a1 namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.416169261Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.451396409Z" level=info msg="shim disconnected" id=5d80d2e517f3f82690e09428c6ef52c03d0e96955841ad1dfb2e0159cab50ad1 namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.451427421Z" level=warning msg="cleaning up after shim disconnected" id=5d80d2e517f3f82690e09428c6ef52c03d0e96955841ad1dfb2e0159cab50ad1 namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.451431959Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1279]: time="2024-08-28T17:04:00.451506179Z" level=info msg="ignoring event" container=5d80d2e517f3f82690e09428c6ef52c03d0e96955841ad1dfb2e0159cab50ad1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:00 addons-793000 dockerd[1279]: time="2024-08-28T17:04:00.491162379Z" level=info msg="ignoring event" container=1cac72c8f0079704a14e9c6de4b6a1cf2e36a7a7069fabbc0767e4fb3b3d6ef1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.491459303Z" level=info msg="shim disconnected" id=1cac72c8f0079704a14e9c6de4b6a1cf2e36a7a7069fabbc0767e4fb3b3d6ef1 namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.491509130Z" level=warning msg="cleaning up after shim disconnected" id=1cac72c8f0079704a14e9c6de4b6a1cf2e36a7a7069fabbc0767e4fb3b3d6ef1 namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.491525781Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1279]: time="2024-08-28T17:04:00.552633402Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.561439346Z" level=info msg="shim disconnected" id=35fb3c51e1f9eed34e9976ac1df41d6f0c3712e251602895dd635804ab0cbc01 namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.561468985Z" level=warning msg="cleaning up after shim disconnected" id=35fb3c51e1f9eed34e9976ac1df41d6f0c3712e251602895dd635804ab0cbc01 namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1286]: time="2024-08-28T17:04:00.561473230Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 28 17:04:00 addons-793000 dockerd[1279]: time="2024-08-28T17:04:00.561563436Z" level=info msg="ignoring event" container=35fb3c51e1f9eed34e9976ac1df41d6f0c3712e251602895dd635804ab0cbc01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	3dc7a8850eadc       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  7 seconds ago       Running             hello-world-app            0                   cf85d9519c0d5       hello-world-app-55bf9c44b4-qkxfn
	42beb24b28bfa       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                                                16 seconds ago      Running             nginx                      0                   6fd403a1dd3fc       nginx
	7783190d9e178       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   0e422d0e9ac41       gcp-auth-89d5ffd79-vvmwg
	f70fe89549679       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   46b5e5b223ecf       ingress-nginx-admission-patch-ddf2l
	09d1736c50f78       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               11 minutes ago      Running             cloud-spanner-emulator     0                   469f685a00cc5       cloud-spanner-emulator-769b77f747-gxtcg
	aab758ed9983c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   be836a0ea3bf9       ingress-nginx-admission-create-wfcpf
	1f3f1f267b6e8       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   9ddd322f4d3e6       nvidia-device-plugin-daemonset-lqcnw
	cbc0b2fc1ba9a       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   6c807008b671e       local-path-provisioner-86d989889c-22j6s
	01f33151ebe3c       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   f0c014a484820       yakd-dashboard-67d98fc6b-s65s8
	978fc98a21eaa       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   04c3fea37594e       storage-provisioner
	558f5ac8f9448       2437cf7621777                                                                                                                12 minutes ago      Running             coredns                    0                   8d6a63dae38d2       coredns-6f6b679f8f-9d5s6
	f76504d68e092       71d55d66fd4ee                                                                                                                12 minutes ago      Running             kube-proxy                 0                   c9129e481cdc8       kube-proxy-gdrtb
	84672826d6901       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   4eec5539a4207       etcd-addons-793000
	021f988646499       fcb0683e6bdbd                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   cefb1e69fa296       kube-controller-manager-addons-793000
	4f29d804e33d6       cd0f0ae0ec9e0                                                                                                                12 minutes ago      Running             kube-apiserver             0                   f6aefe869fa04       kube-apiserver-addons-793000
	952a0737474ca       fbbbd428abb4d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   2cde5638367bd       kube-scheduler-addons-793000
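
Note: a listing like the one above can typically be reproduced from inside the minikube VM; the exact command below is an assumption (a crictl-style listing over minikube ssh), not something taken from this report:

    # hypothetical reproduction of the "container status" listing
    minikube ssh --profile addons-793000 -- sudo crictl ps -a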
	
	
	==> coredns [558f5ac8f944] <==
	[INFO] 127.0.0.1:46182 - 41857 "HINFO IN 4961016719257337406.6541062052262547270. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004198167s
	[INFO] 10.244.0.6:48476 - 35284 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000107588s
	[INFO] 10.244.0.6:48476 - 40150 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000145941s
	[INFO] 10.244.0.6:50795 - 30244 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000028671s
	[INFO] 10.244.0.6:50795 - 17703 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000022703s
	[INFO] 10.244.0.6:49871 - 21965 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000026376s
	[INFO] 10.244.0.6:49871 - 26060 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005392s
	[INFO] 10.244.0.6:53910 - 2209 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040356s
	[INFO] 10.244.0.6:53910 - 45216 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000038478s
	[INFO] 10.244.0.6:56729 - 56866 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000032176s
	[INFO] 10.244.0.6:56729 - 17711 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000015024s
	[INFO] 10.244.0.6:44512 - 58125 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000029964s
	[INFO] 10.244.0.6:44512 - 54028 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000017027s
	[INFO] 10.244.0.6:42905 - 14574 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00001565s
	[INFO] 10.244.0.6:42905 - 25833 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000012896s
	[INFO] 10.244.0.6:40993 - 23114 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000011227s
	[INFO] 10.244.0.6:40993 - 62027 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000044738s
	[INFO] 10.244.0.24:59964 - 33855 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.006513441s
	[INFO] 10.244.0.24:44756 - 32992 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000321685s
	[INFO] 10.244.0.24:47912 - 46352 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.006534362s
	[INFO] 10.244.0.24:38122 - 11993 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000042675s
	[INFO] 10.244.0.24:41280 - 253 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000042633s
	[INFO] 10.244.0.24:38402 - 7732 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000031881s
	[INFO] 10.244.0.24:41248 - 44062 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004048684s
	[INFO] 10.244.0.24:50997 - 44222 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004190835s
	
	
	==> describe nodes <==
	Name:               addons-793000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-793000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=addons-793000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T09_51_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-793000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 16:51:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-793000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:03:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:03:59 +0000   Wed, 28 Aug 2024 16:51:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:03:59 +0000   Wed, 28 Aug 2024 16:51:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:03:59 +0000   Wed, 28 Aug 2024 16:51:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:03:59 +0000   Wed, 28 Aug 2024 16:51:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-793000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 b386fc2cdd4c4c54a5d0cb8a1ce1fcc1
	  System UUID:                b386fc2cdd4c4c54a5d0cb8a1ce1fcc1
	  Boot ID:                    284068e7-449e-4a26-b6af-59760616cf0d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-gxtcg                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-qkxfn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  gcp-auth                    gcp-auth-89d5ffd79-vvmwg                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-9d5s6                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-793000                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-793000                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-793000                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-gdrtb                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-793000                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-lqcnw                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-86d989889c-22j6s                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-s65s8                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-793000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-793000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-793000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-793000 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-793000 event: Registered Node addons-793000 in Controller
	
	
	==> dmesg <==
	[Aug28 16:52] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.753117] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.276197] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.319571] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.071479] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.837280] kauditd_printk_skb: 44 callbacks suppressed
	[ +14.386039] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.601132] kauditd_printk_skb: 12 callbacks suppressed
	[Aug28 16:53] kauditd_printk_skb: 2 callbacks suppressed
	[Aug28 16:54] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.069123] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.792562] kauditd_printk_skb: 9 callbacks suppressed
	[ +11.131280] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.385920] kauditd_printk_skb: 20 callbacks suppressed
	[Aug28 16:55] kauditd_printk_skb: 2 callbacks suppressed
	[Aug28 16:58] kauditd_printk_skb: 2 callbacks suppressed
	[Aug28 17:02] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.939536] kauditd_printk_skb: 7 callbacks suppressed
	[Aug28 17:03] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.775480] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.644895] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.436394] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.316087] kauditd_printk_skb: 14 callbacks suppressed
	[ +15.597567] kauditd_printk_skb: 13 callbacks suppressed
	[  +8.403917] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [84672826d690] <==
	{"level":"info","ts":"2024-08-28T16:51:20.620780Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T16:51:20.622287Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T16:51:20.624502Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-08-28T16:51:20.622286Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T16:51:20.626467Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-28T16:51:38.918172Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.246145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:51:38.918208Z","caller":"traceutil/trace.go:171","msg":"trace[1513265646] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:933; }","duration":"101.287844ms","start":"2024-08-28T16:51:38.816913Z","end":"2024-08-28T16:51:38.918201Z","steps":["trace[1513265646] 'range keys from in-memory index tree'  (duration: 101.152091ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:51:48.594495Z","caller":"traceutil/trace.go:171","msg":"trace[1030795230] linearizableReadLoop","detail":"{readStateIndex:996; appliedIndex:995; }","duration":"257.681117ms","start":"2024-08-28T16:51:48.336806Z","end":"2024-08-28T16:51:48.594487Z","steps":["trace[1030795230] 'read index received'  (duration: 257.24851ms)","trace[1030795230] 'applied index is now lower than readState.Index'  (duration: 432.231µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T16:51:48.594735Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.9215ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:51:48.594751Z","caller":"traceutil/trace.go:171","msg":"trace[1146774900] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:975; }","duration":"257.946832ms","start":"2024-08-28T16:51:48.336801Z","end":"2024-08-28T16:51:48.594748Z","steps":["trace[1146774900] 'agreement among raft nodes before linearized reading'  (duration: 257.911317ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:51:48.594911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.091186ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:51:48.594929Z","caller":"traceutil/trace.go:171","msg":"trace[1332651594] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:975; }","duration":"168.110342ms","start":"2024-08-28T16:51:48.426815Z","end":"2024-08-28T16:51:48.594925Z","steps":["trace[1332651594] 'agreement among raft nodes before linearized reading'  (duration: 168.084092ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:51:48.594922Z","caller":"traceutil/trace.go:171","msg":"trace[1518637655] transaction","detail":"{read_only:false; response_revision:975; number_of_response:1; }","duration":"274.776366ms","start":"2024-08-28T16:51:48.319799Z","end":"2024-08-28T16:51:48.594576Z","steps":["trace[1518637655] 'process raft request'  (duration: 274.263799ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:51:48.595156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.111402ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/volcano-system/volcano-admission-77d7d48b68-5cbxk.17eff29261447387\" ","response":"range_response_count:1 size:841"}
	{"level":"info","ts":"2024-08-28T16:51:48.595562Z","caller":"traceutil/trace.go:171","msg":"trace[2006062209] range","detail":"{range_begin:/registry/events/volcano-system/volcano-admission-77d7d48b68-5cbxk.17eff29261447387; range_end:; response_count:1; response_revision:975; }","duration":"150.542923ms","start":"2024-08-28T16:51:48.445015Z","end":"2024-08-28T16:51:48.595558Z","steps":["trace[2006062209] 'agreement among raft nodes before linearized reading'  (duration: 150.109649ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:51:48.595168Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.493107ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:51:48.595693Z","caller":"traceutil/trace.go:171","msg":"trace[1721900646] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:975; }","duration":"164.018027ms","start":"2024-08-28T16:51:48.431673Z","end":"2024-08-28T16:51:48.595691Z","steps":["trace[1721900646] 'agreement among raft nodes before linearized reading'  (duration: 163.488809ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:51:53.759715Z","caller":"traceutil/trace.go:171","msg":"trace[1299296951] transaction","detail":"{read_only:false; response_revision:988; number_of_response:1; }","duration":"115.605797ms","start":"2024-08-28T16:51:53.644098Z","end":"2024-08-28T16:51:53.759704Z","steps":["trace[1299296951] 'process raft request'  (duration: 114.685372ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:51:55.274526Z","caller":"traceutil/trace.go:171","msg":"trace[1744630057] linearizableReadLoop","detail":"{readStateIndex:1018; appliedIndex:1017; }","duration":"111.475795ms","start":"2024-08-28T16:51:55.163042Z","end":"2024-08-28T16:51:55.274517Z","steps":["trace[1744630057] 'read index received'  (duration: 22.255907ms)","trace[1744630057] 'applied index is now lower than readState.Index'  (duration: 89.219596ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-28T16:51:55.274573Z","caller":"traceutil/trace.go:171","msg":"trace[631704092] transaction","detail":"{read_only:false; response_revision:996; number_of_response:1; }","duration":"137.123651ms","start":"2024-08-28T16:51:55.137446Z","end":"2024-08-28T16:51:55.274570Z","steps":["trace[631704092] 'process raft request'  (duration: 47.928332ms)","trace[631704092] 'compare'  (duration: 89.090368ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T16:51:55.274666Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.615451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:51:55.274681Z","caller":"traceutil/trace.go:171","msg":"trace[164438394] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:996; }","duration":"111.636474ms","start":"2024-08-28T16:51:55.163040Z","end":"2024-08-28T16:51:55.274677Z","steps":["trace[164438394] 'agreement among raft nodes before linearized reading'  (duration: 111.609235ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:01:20.669839Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1812}
	{"level":"info","ts":"2024-08-28T17:01:20.771820Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1812,"took":"98.621128ms","hash":441871775,"current-db-size-bytes":8982528,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4718592,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-08-28T17:01:20.771948Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":441871775,"revision":1812,"compact-revision":-1}
	
	
	==> gcp-auth [7783190d9e17] <==
	2024/08/28 16:54:09 GCP Auth Webhook started!
	2024/08/28 16:54:25 Ready to marshal response ...
	2024/08/28 16:54:25 Ready to write response ...
	2024/08/28 16:54:26 Ready to marshal response ...
	2024/08/28 16:54:26 Ready to write response ...
	2024/08/28 16:54:48 Ready to marshal response ...
	2024/08/28 16:54:48 Ready to write response ...
	2024/08/28 16:54:48 Ready to marshal response ...
	2024/08/28 16:54:48 Ready to write response ...
	2024/08/28 16:54:48 Ready to marshal response ...
	2024/08/28 16:54:48 Ready to write response ...
	2024/08/28 17:02:54 Ready to marshal response ...
	2024/08/28 17:02:54 Ready to write response ...
	2024/08/28 17:03:00 Ready to marshal response ...
	2024/08/28 17:03:00 Ready to write response ...
	2024/08/28 17:03:10 Ready to marshal response ...
	2024/08/28 17:03:10 Ready to write response ...
	2024/08/28 17:03:41 Ready to marshal response ...
	2024/08/28 17:03:41 Ready to write response ...
	2024/08/28 17:03:51 Ready to marshal response ...
	2024/08/28 17:03:51 Ready to write response ...
	2024/08/28 17:03:59 Ready to marshal response ...
	2024/08/28 17:03:59 Ready to write response ...
	2024/08/28 17:03:59 Ready to marshal response ...
	2024/08/28 17:03:59 Ready to write response ...
	
	
	==> kernel <==
	 17:04:00 up 13 min,  0 users,  load average: 0.51, 0.58, 0.43
	Linux addons-793000 5.10.207 #1 SMP PREEMPT Tue Aug 27 17:57:16 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4f29d804e33d] <==
	W0828 16:54:40.439856       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0828 16:54:40.451982       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0828 16:54:40.468442       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0828 16:54:40.473505       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0828 16:54:40.566259       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0828 16:54:40.578453       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0828 17:03:01.386312       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0828 17:03:25.422108       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:25.422131       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:25.430660       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:25.430675       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:25.442927       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:25.442988       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:25.538698       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:25.538716       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:25.544317       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:25.544336       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0828 17:03:26.539003       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0828 17:03:26.544337       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0828 17:03:26.554304       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0828 17:03:36.127540       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0828 17:03:37.237817       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0828 17:03:41.465066       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0828 17:03:41.565846       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.128.145"}
	I0828 17:03:51.801111       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.154.159"}
	
	
	==> kube-controller-manager [021f98864649] <==
	E0828 17:03:42.012903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:44.203495       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:44.203518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0828 17:03:46.209407       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0828 17:03:46.399984       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:46.400081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0828 17:03:51.752008       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="17.798194ms"
	I0828 17:03:51.758788       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="6.655884ms"
	I0828 17:03:51.758821       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.028µs"
	I0828 17:03:51.760598       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="18.689µs"
	I0828 17:03:52.641617       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0828 17:03:52.642355       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="1.374µs"
	I0828 17:03:52.645417       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0828 17:03:53.490496       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:53.490540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:54.149789       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:54.149816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0828 17:03:54.526628       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.840864ms"
	I0828 17:03:54.527021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="21.976µs"
	I0828 17:03:58.438139       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0828 17:03:58.438222       1 shared_informer.go:320] Caches are synced for resource quota
	I0828 17:03:58.816598       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0828 17:03:58.816626       1 shared_informer.go:320] Caches are synced for garbage collector
	I0828 17:03:59.686435       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-793000"
	I0828 17:04:00.390856       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="1.499µs"
	
	
	==> kube-proxy [f76504d68e09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 16:51:29.308691       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 16:51:29.318970       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0828 16:51:29.319005       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 16:51:29.335523       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 16:51:29.335545       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 16:51:29.335559       1 server_linux.go:169] "Using iptables Proxier"
	I0828 16:51:29.336172       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 16:51:29.336332       1 server.go:483] "Version info" version="v1.31.0"
	I0828 16:51:29.336344       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 16:51:29.337172       1 config.go:197] "Starting service config controller"
	I0828 16:51:29.337192       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 16:51:29.337211       1 config.go:104] "Starting endpoint slice config controller"
	I0828 16:51:29.337219       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 16:51:29.337473       1 config.go:326] "Starting node config controller"
	I0828 16:51:29.337482       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 16:51:29.438002       1 shared_informer.go:320] Caches are synced for node config
	I0828 16:51:29.438027       1 shared_informer.go:320] Caches are synced for service config
	I0828 16:51:29.438057       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [952a0737474c] <==
	W0828 16:51:21.178926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0828 16:51:21.179279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:21.179157       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 16:51:21.179342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:21.179177       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 16:51:21.179413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:21.179212       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 16:51:21.179730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:21.179228       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 16:51:21.180035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:21.179241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0828 16:51:21.180108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:21.179639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 16:51:21.180163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:21.179659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 16:51:21.180320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:21.179703       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0828 16:51:21.180336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:21.179715       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0828 16:51:21.180343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:21.999707       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 16:51:21.999759       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0828 16:51:22.091540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 16:51:22.091667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0828 16:51:23.776741       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 17:03:59 addons-793000 kubelet[2049]: I0828 17:03:59.876836    2049 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6761840-c615-4fb6-ac36-9249dd5e1266" containerName="controller"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.023801    2049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/ec656a0c-eb2b-4d89-b0b9-fef7edd1d2a4-script\") pod \"helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73\" (UID: \"ec656a0c-eb2b-4d89-b0b9-fef7edd1d2a4\") " pod="local-path-storage/helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.023837    2049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/ec656a0c-eb2b-4d89-b0b9-fef7edd1d2a4-data\") pod \"helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73\" (UID: \"ec656a0c-eb2b-4d89-b0b9-fef7edd1d2a4\") " pod="local-path-storage/helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.023850    2049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ec656a0c-eb2b-4d89-b0b9-fef7edd1d2a4-gcp-creds\") pod \"helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73\" (UID: \"ec656a0c-eb2b-4d89-b0b9-fef7edd1d2a4\") " pod="local-path-storage/helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.023864    2049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv8lr\" (UniqueName: \"kubernetes.io/projected/ec656a0c-eb2b-4d89-b0b9-fef7edd1d2a4-kube-api-access-sv8lr\") pod \"helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73\" (UID: \"ec656a0c-eb2b-4d89-b0b9-fef7edd1d2a4\") " pod="local-path-storage/helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.326409    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6h4f\" (UniqueName: \"kubernetes.io/projected/afa40fb6-aa3b-4c2d-907b-4ba79017391c-kube-api-access-n6h4f\") pod \"afa40fb6-aa3b-4c2d-907b-4ba79017391c\" (UID: \"afa40fb6-aa3b-4c2d-907b-4ba79017391c\") "
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.326436    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/afa40fb6-aa3b-4c2d-907b-4ba79017391c-gcp-creds\") pod \"afa40fb6-aa3b-4c2d-907b-4ba79017391c\" (UID: \"afa40fb6-aa3b-4c2d-907b-4ba79017391c\") "
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.326479    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa40fb6-aa3b-4c2d-907b-4ba79017391c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "afa40fb6-aa3b-4c2d-907b-4ba79017391c" (UID: "afa40fb6-aa3b-4c2d-907b-4ba79017391c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.327201    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa40fb6-aa3b-4c2d-907b-4ba79017391c-kube-api-access-n6h4f" (OuterVolumeSpecName: "kube-api-access-n6h4f") pod "afa40fb6-aa3b-4c2d-907b-4ba79017391c" (UID: "afa40fb6-aa3b-4c2d-907b-4ba79017391c"). InnerVolumeSpecName "kube-api-access-n6h4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.427567    2049 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-n6h4f\" (UniqueName: \"kubernetes.io/projected/afa40fb6-aa3b-4c2d-907b-4ba79017391c-kube-api-access-n6h4f\") on node \"addons-793000\" DevicePath \"\""
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.427606    2049 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/afa40fb6-aa3b-4c2d-907b-4ba79017391c-gcp-creds\") on node \"addons-793000\" DevicePath \"\""
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.528460    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfqft\" (UniqueName: \"kubernetes.io/projected/3738eb26-8c6c-4525-be41-2bb099331da6-kube-api-access-vfqft\") pod \"3738eb26-8c6c-4525-be41-2bb099331da6\" (UID: \"3738eb26-8c6c-4525-be41-2bb099331da6\") "
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.529647    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3738eb26-8c6c-4525-be41-2bb099331da6-kube-api-access-vfqft" (OuterVolumeSpecName: "kube-api-access-vfqft") pod "3738eb26-8c6c-4525-be41-2bb099331da6" (UID: "3738eb26-8c6c-4525-be41-2bb099331da6"). InnerVolumeSpecName "kube-api-access-vfqft". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.628919    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl6hq\" (UniqueName: \"kubernetes.io/projected/c792a3f0-ab28-4331-bc5f-776dcca7e356-kube-api-access-jl6hq\") pod \"c792a3f0-ab28-4331-bc5f-776dcca7e356\" (UID: \"c792a3f0-ab28-4331-bc5f-776dcca7e356\") "
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.628955    2049 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vfqft\" (UniqueName: \"kubernetes.io/projected/3738eb26-8c6c-4525-be41-2bb099331da6-kube-api-access-vfqft\") on node \"addons-793000\" DevicePath \"\""
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.629664    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c792a3f0-ab28-4331-bc5f-776dcca7e356-kube-api-access-jl6hq" (OuterVolumeSpecName: "kube-api-access-jl6hq") pod "c792a3f0-ab28-4331-bc5f-776dcca7e356" (UID: "c792a3f0-ab28-4331-bc5f-776dcca7e356"). InnerVolumeSpecName "kube-api-access-jl6hq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.650720    2049 scope.go:117] "RemoveContainer" containerID="5d80d2e517f3f82690e09428c6ef52c03d0e96955841ad1dfb2e0159cab50ad1"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.670547    2049 scope.go:117] "RemoveContainer" containerID="5d80d2e517f3f82690e09428c6ef52c03d0e96955841ad1dfb2e0159cab50ad1"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: E0828 17:04:00.670922    2049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 5d80d2e517f3f82690e09428c6ef52c03d0e96955841ad1dfb2e0159cab50ad1" containerID="5d80d2e517f3f82690e09428c6ef52c03d0e96955841ad1dfb2e0159cab50ad1"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.670937    2049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"5d80d2e517f3f82690e09428c6ef52c03d0e96955841ad1dfb2e0159cab50ad1"} err="failed to get container status \"5d80d2e517f3f82690e09428c6ef52c03d0e96955841ad1dfb2e0159cab50ad1\": rpc error: code = Unknown desc = Error response from daemon: No such container: 5d80d2e517f3f82690e09428c6ef52c03d0e96955841ad1dfb2e0159cab50ad1"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.671428    2049 scope.go:117] "RemoveContainer" containerID="6ba5dfed56857308e614cc1032e38087084559b2a2b623dff8ec11c524b2e8a1"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.684282    2049 scope.go:117] "RemoveContainer" containerID="6ba5dfed56857308e614cc1032e38087084559b2a2b623dff8ec11c524b2e8a1"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: E0828 17:04:00.684731    2049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 6ba5dfed56857308e614cc1032e38087084559b2a2b623dff8ec11c524b2e8a1" containerID="6ba5dfed56857308e614cc1032e38087084559b2a2b623dff8ec11c524b2e8a1"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.684748    2049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"6ba5dfed56857308e614cc1032e38087084559b2a2b623dff8ec11c524b2e8a1"} err="failed to get container status \"6ba5dfed56857308e614cc1032e38087084559b2a2b623dff8ec11c524b2e8a1\": rpc error: code = Unknown desc = Error response from daemon: No such container: 6ba5dfed56857308e614cc1032e38087084559b2a2b623dff8ec11c524b2e8a1"
	Aug 28 17:04:00 addons-793000 kubelet[2049]: I0828 17:04:00.729382    2049 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jl6hq\" (UniqueName: \"kubernetes.io/projected/c792a3f0-ab28-4331-bc5f-776dcca7e356-kube-api-access-jl6hq\") on node \"addons-793000\" DevicePath \"\""
	
	
	==> storage-provisioner [978fc98a21ea] <==
	I0828 16:51:32.100838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 16:51:32.143626       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 16:51:32.143653       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 16:51:32.204077       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 16:51:32.204162       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-793000_731750f7-a34e-4a5d-8dcc-dccdd6d1a2fa!
	I0828 16:51:32.204561       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2d25b60f-252a-4b1c-b25c-ef9ab2176636", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-793000_731750f7-a34e-4a5d-8dcc-dccdd6d1a2fa became leader
	I0828 16:51:32.304788       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-793000_731750f7-a34e-4a5d-8dcc-dccdd6d1a2fa!
	

                                                
                                                
-- /stdout --
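The long run of NXDOMAIN lines in the coredns log above is expected behaviour rather than a fault: the querying pod's resolver walks the cluster DNS search path before trying the name as-is, so each lookup of registry.kube-system.svc.cluster.local is first expanded with every search suffix. A minimal sketch of the resolv.conf such a pod would carry, inferred from the suffixes visible in the log (the nameserver address is the conventional kube-dns ClusterIP and is an assumption, not taken from this report):

	search kube-system.svc.cluster.local svc.cluster.local cluster.local
	nameserver 10.96.0.10   # assumed default kube-dns ClusterIP, not shown in this log
	options ndots:5         # names with fewer than 5 dots go through the search list first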
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-793000 -n addons-793000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-793000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-793000 describe pod busybox test-local-path helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-793000 describe pod busybox test-local-path helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73: exit status 1 (46.308792ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-793000/192.168.105.2
	Start Time:       Wed, 28 Aug 2024 09:54:48 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sd65l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sd65l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m13s                  default-scheduler  Successfully assigned default/busybox to addons-793000
	  Normal   Pulling    7m53s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m53s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m53s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m27s (x6 over 9m11s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m2s (x21 over 9m11s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lvzct (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-lvzct:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-793000 describe pod busybox test-local-path helper-pod-create-pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.27s)
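The post-mortem flow above is worth keeping in mind when reading the other failures in this report: helpers_test.go first lists every pod whose phase is not Running, then describes each one. The same check can be reproduced by hand while the profile is still up (a sketch, assuming the addons-793000 context from this run):

	kubectl --context addons-793000 get po -A --field-selector=status.phase!=Running
	kubectl --context addons-793000 describe pod busybox -n default

The non-zero exit at helpers_test.go:277 comes from the stderr block, not from the two pods that were described: the short-lived helper pod had already been deleted by the time describe ran, so kubectl reported NotFound for it. The busybox description itself points at the underlying problem, an image pull from gcr.io rejected with "unauthorized: authentication failed".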

                                                
                                    
TestCertOptions (10.18s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-402000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-402000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.903820041s)

-- stdout --
	* [cert-options-402000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-402000" primary control-plane node in "cert-options-402000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-402000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-402000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-402000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-402000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-402000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.285917ms)

-- stdout --
	* The control-plane node cert-options-402000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-402000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-402000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
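
All four SAN assertions fail together because the ssh step above returned the "host is not running" advice instead of a certificate. For reference, the check the test automates is a single openssl call against the same certificate path; a sketch for a cluster that did start:

	# Print the SANs the apiserver certificate was minted with
	minikube -p cert-options-402000 ssh -- \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | \
	  grep -A1 'Subject Alternative Name'
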
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-402000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-402000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-402000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (54.337459ms)

-- stdout --
	* The control-plane node cert-options-402000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-402000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-402000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-402000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-402000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-28 10:36:44.966798 -0700 PDT m=+2776.887916334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-402000 -n cert-options-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-402000 -n cert-options-402000: exit status 7 (30.773541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-402000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-402000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-402000
--- FAIL: TestCertOptions (10.18s)
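
Like nearly every failure in this run, the start dies before provisioning because socket_vmnet_client cannot connect to /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A minimal health check for the daemon on the build agent, assuming the /opt/socket_vmnet layout shown in the executed commands in this log (the Homebrew service commands and the gateway address are assumptions; adjust to the local setup):

	# Does the control socket exist at the path the client is dialing?
	ls -l /var/run/socket_vmnet
	# With a Homebrew install, socket_vmnet runs as a root launchd service
	sudo brew services info socket_vmnet
	sudo brew services restart socket_vmnet
	# Or run the daemon in the foreground and watch it accept connections
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet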

TestCertExpiration (195.41s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-705000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-705000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.05359825s)

-- stdout --
	* [cert-expiration-705000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-705000" primary control-plane node in "cert-expiration-705000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-705000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-705000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-705000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-705000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-705000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.215692625s)

-- stdout --
	* [cert-expiration-705000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-705000" primary control-plane node in "cert-expiration-705000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-705000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-705000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-705000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-705000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-705000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-705000" primary control-plane node in "cert-expiration-705000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-705000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-705000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-705000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-28 10:39:45.050463 -0700 PDT m=+2956.978080751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-705000 -n cert-expiration-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-705000 -n cert-expiration-705000: exit status 7 (58.778875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-705000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-705000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-705000
--- FAIL: TestCertExpiration (195.41s)
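
Both starts here fail the same way as TestCertOptions, so the certificate logic is never exercised. For reference, the property this test manipulates with --cert-expiration is visible directly on the apiserver certificate once a VM boots; a sketch of the manual check (profile name and certificate path are taken from this log):

	# After a --cert-expiration=3m start, notAfter should be ~3 minutes out
	minikube -p cert-expiration-705000 ssh -- \
	  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"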

TestDockerFlags (10.18s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-261000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-261000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.949141792s)

-- stdout --
	* [docker-flags-261000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-261000" primary control-plane node in "docker-flags-261000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-261000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:36:24.738298    4472 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:36:24.738424    4472 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:36:24.738428    4472 out.go:358] Setting ErrFile to fd 2...
	I0828 10:36:24.738430    4472 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:36:24.738588    4472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:36:24.739658    4472 out.go:352] Setting JSON to false
	I0828 10:36:24.755803    4472 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3948,"bootTime":1724862636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:36:24.755878    4472 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:36:24.761351    4472 out.go:177] * [docker-flags-261000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:36:24.767267    4472 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:36:24.767323    4472 notify.go:220] Checking for updates...
	I0828 10:36:24.775152    4472 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:36:24.778274    4472 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:36:24.781297    4472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:36:24.784282    4472 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:36:24.787213    4472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:36:24.790548    4472 config.go:182] Loaded profile config "force-systemd-flag-581000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:36:24.790618    4472 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:36:24.790661    4472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:36:24.794174    4472 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:36:24.801212    4472 start.go:297] selected driver: qemu2
	I0828 10:36:24.801218    4472 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:36:24.801223    4472 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:36:24.803563    4472 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:36:24.806140    4472 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:36:24.809271    4472 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0828 10:36:24.809306    4472 cni.go:84] Creating CNI manager for ""
	I0828 10:36:24.809313    4472 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:36:24.809317    4472 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 10:36:24.809345    4472 start.go:340] cluster config:
	{Name:docker-flags-261000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-261000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:36:24.813237    4472 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:36:24.821189    4472 out.go:177] * Starting "docker-flags-261000" primary control-plane node in "docker-flags-261000" cluster
	I0828 10:36:24.825202    4472 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:36:24.825218    4472 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:36:24.825230    4472 cache.go:56] Caching tarball of preloaded images
	I0828 10:36:24.825293    4472 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:36:24.825299    4472 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:36:24.825374    4472 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/docker-flags-261000/config.json ...
	I0828 10:36:24.825392    4472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/docker-flags-261000/config.json: {Name:mk9d60ca4e6219d25cd9f50febf786fb09b2ebdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:36:24.825606    4472 start.go:360] acquireMachinesLock for docker-flags-261000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:36:24.825641    4472 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "docker-flags-261000"
	I0828 10:36:24.825653    4472 start.go:93] Provisioning new machine with config: &{Name:docker-flags-261000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-261000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:36:24.825682    4472 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:36:24.834262    4472 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0828 10:36:24.852201    4472 start.go:159] libmachine.API.Create for "docker-flags-261000" (driver="qemu2")
	I0828 10:36:24.852235    4472 client.go:168] LocalClient.Create starting
	I0828 10:36:24.852302    4472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:36:24.852335    4472 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:24.852346    4472 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:24.852387    4472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:36:24.852409    4472 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:24.852415    4472 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:24.852769    4472 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:36:25.013512    4472 main.go:141] libmachine: Creating SSH key...
	I0828 10:36:25.129502    4472 main.go:141] libmachine: Creating Disk image...
	I0828 10:36:25.129508    4472 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:36:25.129706    4472 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/disk.qcow2
	I0828 10:36:25.139334    4472 main.go:141] libmachine: STDOUT: 
	I0828 10:36:25.139351    4472 main.go:141] libmachine: STDERR: 
	I0828 10:36:25.139411    4472 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/disk.qcow2 +20000M
	I0828 10:36:25.147308    4472 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:36:25.147330    4472 main.go:141] libmachine: STDERR: 
	I0828 10:36:25.147344    4472 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/disk.qcow2
	I0828 10:36:25.147348    4472 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:36:25.147357    4472 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:36:25.147395    4472 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:d0:27:96:53:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/disk.qcow2
	I0828 10:36:25.149009    4472 main.go:141] libmachine: STDOUT: 
	I0828 10:36:25.149024    4472 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:36:25.149041    4472 client.go:171] duration metric: took 296.812417ms to LocalClient.Create
	I0828 10:36:27.151145    4472 start.go:128] duration metric: took 2.325525625s to createHost
	I0828 10:36:27.151208    4472 start.go:83] releasing machines lock for "docker-flags-261000", held for 2.325636834s
	W0828 10:36:27.151306    4472 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:36:27.169262    4472 out.go:177] * Deleting "docker-flags-261000" in qemu2 ...
	W0828 10:36:27.201771    4472 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:36:27.201794    4472 start.go:729] Will try again in 5 seconds ...
	I0828 10:36:32.203831    4472 start.go:360] acquireMachinesLock for docker-flags-261000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:36:32.254127    4472 start.go:364] duration metric: took 50.194542ms to acquireMachinesLock for "docker-flags-261000"
	I0828 10:36:32.254294    4472 start.go:93] Provisioning new machine with config: &{Name:docker-flags-261000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-261000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:36:32.254585    4472 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:36:32.270244    4472 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0828 10:36:32.319040    4472 start.go:159] libmachine.API.Create for "docker-flags-261000" (driver="qemu2")
	I0828 10:36:32.319085    4472 client.go:168] LocalClient.Create starting
	I0828 10:36:32.319209    4472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:36:32.319278    4472 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:32.319296    4472 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:32.319354    4472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:36:32.319399    4472 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:32.319413    4472 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:32.319934    4472 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:36:32.510932    4472 main.go:141] libmachine: Creating SSH key...
	I0828 10:36:32.579868    4472 main.go:141] libmachine: Creating Disk image...
	I0828 10:36:32.579873    4472 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:36:32.580046    4472 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/disk.qcow2
	I0828 10:36:32.589548    4472 main.go:141] libmachine: STDOUT: 
	I0828 10:36:32.589575    4472 main.go:141] libmachine: STDERR: 
	I0828 10:36:32.589620    4472 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/disk.qcow2 +20000M
	I0828 10:36:32.597683    4472 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:36:32.597696    4472 main.go:141] libmachine: STDERR: 
	I0828 10:36:32.597710    4472 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/disk.qcow2
	I0828 10:36:32.597719    4472 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:36:32.597726    4472 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:36:32.597750    4472 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:88:3f:d0:eb:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/docker-flags-261000/disk.qcow2
	I0828 10:36:32.599350    4472 main.go:141] libmachine: STDOUT: 
	I0828 10:36:32.599364    4472 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:36:32.599376    4472 client.go:171] duration metric: took 280.29525ms to LocalClient.Create
	I0828 10:36:34.601474    4472 start.go:128] duration metric: took 2.346936917s to createHost
	I0828 10:36:34.601541    4472 start.go:83] releasing machines lock for "docker-flags-261000", held for 2.34747075s
	W0828 10:36:34.601868    4472 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-261000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-261000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:36:34.615431    4472 out.go:201] 
	W0828 10:36:34.629421    4472 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:36:34.629455    4472 out.go:270] * 
	* 
	W0828 10:36:34.631304    4472 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:36:34.645441    4472 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-261000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-261000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-261000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.966375ms)

-- stdout --
	* The control-plane node docker-flags-261000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-261000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-261000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-261000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-261000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-261000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-261000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-261000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-261000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.699ms)

-- stdout --
	* The control-plane node docker-flags-261000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-261000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-261000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-261000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-261000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-261000\"\n"
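
Both assertions compare against the "host is not running" advice text because the ssh commands never reached a VM. On a healthy node, the two systemctl properties the test greps are single lines; a sketch of what a passing run would query, with the expected fragments mirroring the --docker-env and --docker-opt flags from the start command (the exact formatting of systemd's output is an assumption):

	minikube -p docker-flags-261000 ssh -- "sudo systemctl show docker --property=Environment --no-pager"
	# should contain: Environment=FOO=BAR BAZ=BAT
	minikube -p docker-flags-261000 ssh -- "sudo systemctl show docker --property=ExecStart --no-pager"
	# should contain --debug and --icc=true among dockerd's arguments
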
panic.go:626: *** TestDockerFlags FAILED at 2024-08-28 10:36:34.786391 -0700 PDT m=+2766.707141042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-261000 -n docker-flags-261000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-261000 -n docker-flags-261000: exit status 7 (34.047709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-261000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-261000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-261000
--- FAIL: TestDockerFlags (10.18s)

TestForceSystemdFlag (10.02s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-581000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-581000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.823015125s)

-- stdout --
	* [force-systemd-flag-581000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-581000" primary control-plane node in "force-systemd-flag-581000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-581000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:36:19.791185    4451 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:36:19.791327    4451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:36:19.791332    4451 out.go:358] Setting ErrFile to fd 2...
	I0828 10:36:19.791335    4451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:36:19.791467    4451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:36:19.792583    4451 out.go:352] Setting JSON to false
	I0828 10:36:19.808445    4451 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3943,"bootTime":1724862636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:36:19.808512    4451 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:36:19.814495    4451 out.go:177] * [force-systemd-flag-581000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:36:19.822473    4451 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:36:19.822545    4451 notify.go:220] Checking for updates...
	I0828 10:36:19.832458    4451 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:36:19.842486    4451 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:36:19.850462    4451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:36:19.854472    4451 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:36:19.857435    4451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:36:19.860814    4451 config.go:182] Loaded profile config "force-systemd-env-611000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:36:19.860891    4451 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:36:19.860948    4451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:36:19.865521    4451 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:36:19.872428    4451 start.go:297] selected driver: qemu2
	I0828 10:36:19.872438    4451 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:36:19.872444    4451 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:36:19.874699    4451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:36:19.877500    4451 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:36:19.880511    4451 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 10:36:19.880533    4451 cni.go:84] Creating CNI manager for ""
	I0828 10:36:19.880542    4451 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:36:19.880550    4451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 10:36:19.880599    4451 start.go:340] cluster config:
	{Name:force-systemd-flag-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:36:19.884391    4451 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:36:19.891498    4451 out.go:177] * Starting "force-systemd-flag-581000" primary control-plane node in "force-systemd-flag-581000" cluster
	I0828 10:36:19.895496    4451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:36:19.895509    4451 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:36:19.895517    4451 cache.go:56] Caching tarball of preloaded images
	I0828 10:36:19.895569    4451 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:36:19.895575    4451 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:36:19.895640    4451 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/force-systemd-flag-581000/config.json ...
	I0828 10:36:19.895652    4451 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/force-systemd-flag-581000/config.json: {Name:mkcf4858d2936cd4de85c9aa13675c4e2c70ba78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:36:19.896044    4451 start.go:360] acquireMachinesLock for force-systemd-flag-581000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:36:19.896082    4451 start.go:364] duration metric: took 29.708µs to acquireMachinesLock for "force-systemd-flag-581000"
	I0828 10:36:19.896094    4451 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:36:19.896119    4451 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:36:19.904434    4451 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0828 10:36:19.923546    4451 start.go:159] libmachine.API.Create for "force-systemd-flag-581000" (driver="qemu2")
	I0828 10:36:19.923580    4451 client.go:168] LocalClient.Create starting
	I0828 10:36:19.923645    4451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:36:19.923683    4451 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:19.923692    4451 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:19.923731    4451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:36:19.923755    4451 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:19.923766    4451 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:19.924157    4451 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:36:20.085992    4451 main.go:141] libmachine: Creating SSH key...
	I0828 10:36:20.141048    4451 main.go:141] libmachine: Creating Disk image...
	I0828 10:36:20.141053    4451 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:36:20.141218    4451 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/disk.qcow2
	I0828 10:36:20.150272    4451 main.go:141] libmachine: STDOUT: 
	I0828 10:36:20.150292    4451 main.go:141] libmachine: STDERR: 
	I0828 10:36:20.150330    4451 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/disk.qcow2 +20000M
	I0828 10:36:20.158217    4451 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:36:20.158244    4451 main.go:141] libmachine: STDERR: 
	I0828 10:36:20.158258    4451 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/disk.qcow2
	I0828 10:36:20.158262    4451 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:36:20.158274    4451 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:36:20.158300    4451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:19:f6:26:46:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/disk.qcow2
	I0828 10:36:20.159906    4451 main.go:141] libmachine: STDOUT: 
	I0828 10:36:20.159921    4451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:36:20.159938    4451 client.go:171] duration metric: took 236.360167ms to LocalClient.Create
	I0828 10:36:22.162042    4451 start.go:128] duration metric: took 2.265982166s to createHost
	I0828 10:36:22.162104    4451 start.go:83] releasing machines lock for "force-systemd-flag-581000", held for 2.266094459s
	W0828 10:36:22.162209    4451 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:36:22.192326    4451 out.go:177] * Deleting "force-systemd-flag-581000" in qemu2 ...
	W0828 10:36:22.218050    4451 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:36:22.218067    4451 start.go:729] Will try again in 5 seconds ...
	I0828 10:36:27.220090    4451 start.go:360] acquireMachinesLock for force-systemd-flag-581000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:36:27.220429    4451 start.go:364] duration metric: took 241.125µs to acquireMachinesLock for "force-systemd-flag-581000"
	I0828 10:36:27.220540    4451 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:36:27.220806    4451 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:36:27.229335    4451 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0828 10:36:27.272453    4451 start.go:159] libmachine.API.Create for "force-systemd-flag-581000" (driver="qemu2")
	I0828 10:36:27.272499    4451 client.go:168] LocalClient.Create starting
	I0828 10:36:27.272604    4451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:36:27.272673    4451 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:27.272688    4451 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:27.272743    4451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:36:27.272785    4451 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:27.272797    4451 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:27.273962    4451 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:36:27.455281    4451 main.go:141] libmachine: Creating SSH key...
	I0828 10:36:27.517193    4451 main.go:141] libmachine: Creating Disk image...
	I0828 10:36:27.517198    4451 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:36:27.517373    4451 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/disk.qcow2
	I0828 10:36:27.526676    4451 main.go:141] libmachine: STDOUT: 
	I0828 10:36:27.526693    4451 main.go:141] libmachine: STDERR: 
	I0828 10:36:27.526747    4451 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/disk.qcow2 +20000M
	I0828 10:36:27.534594    4451 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:36:27.534608    4451 main.go:141] libmachine: STDERR: 
	I0828 10:36:27.534620    4451 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/disk.qcow2
	I0828 10:36:27.534625    4451 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:36:27.534634    4451 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:36:27.534667    4451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:60:36:94:e1:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-flag-581000/disk.qcow2
	I0828 10:36:27.536272    4451 main.go:141] libmachine: STDOUT: 
	I0828 10:36:27.536293    4451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:36:27.536305    4451 client.go:171] duration metric: took 263.809417ms to LocalClient.Create
	I0828 10:36:29.538429    4451 start.go:128] duration metric: took 2.317670541s to createHost
	I0828 10:36:29.538498    4451 start.go:83] releasing machines lock for "force-systemd-flag-581000", held for 2.318117166s
	W0828 10:36:29.538910    4451 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:36:29.553565    4451 out.go:201] 
	W0828 10:36:29.557706    4451 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:36:29.557733    4451 out.go:270] * 
	* 
	W0828 10:36:29.560343    4451 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:36:29.574586    4451 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-581000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-581000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-581000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.401417ms)

-- stdout --
	* The control-plane node force-systemd-flag-581000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-581000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-581000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-28 10:36:29.667067 -0700 PDT m=+2761.587632751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-581000 -n force-systemd-flag-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-581000 -n force-systemd-flag-581000: exit status 7 (35.091417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-581000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-581000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-581000
--- FAIL: TestForceSystemdFlag (10.02s)
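
Editor's note: this failure (and TestForceSystemdEnv below) has a single host-side root cause visible in the stderr: every qemu-system-aarch64 launch is wrapped in socket_vmnet_client, which could not reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal sketch of how one might verify the daemon on the build host, assuming a Homebrew-managed socket_vmnet install (the service management commands are assumptions, not taken from this log; only the socket path appears above):

	# Does the unix socket exist, and is anything serving it? (path taken from the log)
	ls -l /var/run/socket_vmnet
	# With a Homebrew install, socket_vmnet typically runs as a root-owned service:
	sudo brew services info socket_vmnet
	# Assumed remedy if the service is stopped (not executed in this run):
	sudo brew services restart socket_vmnet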

TestForceSystemdEnv (10.84s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-611000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-611000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.645344333s)

-- stdout --
	* [force-systemd-env-611000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-611000" primary control-plane node in "force-systemd-env-611000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-611000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:36:13.902746    4415 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:36:13.902857    4415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:36:13.902861    4415 out.go:358] Setting ErrFile to fd 2...
	I0828 10:36:13.902869    4415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:36:13.902992    4415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:36:13.904214    4415 out.go:352] Setting JSON to false
	I0828 10:36:13.922046    4415 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3937,"bootTime":1724862636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:36:13.922123    4415 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:36:13.928824    4415 out.go:177] * [force-systemd-env-611000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:36:13.935806    4415 notify.go:220] Checking for updates...
	I0828 10:36:13.939654    4415 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:36:13.946606    4415 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:36:13.954642    4415 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:36:13.961670    4415 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:36:13.965663    4415 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:36:13.973752    4415 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0828 10:36:13.977821    4415 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:36:13.977876    4415 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:36:13.981679    4415 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:36:13.990360    4415 start.go:297] selected driver: qemu2
	I0828 10:36:13.990366    4415 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:36:13.990371    4415 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:36:13.992746    4415 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:36:13.996744    4415 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:36:13.999705    4415 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 10:36:13.999721    4415 cni.go:84] Creating CNI manager for ""
	I0828 10:36:13.999727    4415 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:36:13.999732    4415 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 10:36:13.999760    4415 start.go:340] cluster config:
	{Name:force-systemd-env-611000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:36:14.003359    4415 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:36:14.011638    4415 out.go:177] * Starting "force-systemd-env-611000" primary control-plane node in "force-systemd-env-611000" cluster
	I0828 10:36:14.014657    4415 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:36:14.014684    4415 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:36:14.014693    4415 cache.go:56] Caching tarball of preloaded images
	I0828 10:36:14.014773    4415 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:36:14.014779    4415 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:36:14.014843    4415 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/force-systemd-env-611000/config.json ...
	I0828 10:36:14.014854    4415 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/force-systemd-env-611000/config.json: {Name:mk8c7b1523bf20cd26984e64a739cf6d47ff9923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:36:14.015058    4415 start.go:360] acquireMachinesLock for force-systemd-env-611000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:36:14.015089    4415 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "force-systemd-env-611000"
	I0828 10:36:14.015100    4415 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:36:14.015127    4415 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:36:14.018653    4415 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0828 10:36:14.034539    4415 start.go:159] libmachine.API.Create for "force-systemd-env-611000" (driver="qemu2")
	I0828 10:36:14.034565    4415 client.go:168] LocalClient.Create starting
	I0828 10:36:14.034646    4415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:36:14.034678    4415 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:14.034690    4415 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:14.034731    4415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:36:14.034754    4415 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:14.034761    4415 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:14.035083    4415 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:36:14.197256    4415 main.go:141] libmachine: Creating SSH key...
	I0828 10:36:14.397621    4415 main.go:141] libmachine: Creating Disk image...
	I0828 10:36:14.397629    4415 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:36:14.397812    4415 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/disk.qcow2
	I0828 10:36:14.407490    4415 main.go:141] libmachine: STDOUT: 
	I0828 10:36:14.407507    4415 main.go:141] libmachine: STDERR: 
	I0828 10:36:14.407555    4415 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/disk.qcow2 +20000M
	I0828 10:36:14.415955    4415 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:36:14.415972    4415 main.go:141] libmachine: STDERR: 
	I0828 10:36:14.415995    4415 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/disk.qcow2
	I0828 10:36:14.416000    4415 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:36:14.416012    4415 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:36:14.416044    4415 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:c6:cf:fb:e3:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/disk.qcow2
	I0828 10:36:14.417725    4415 main.go:141] libmachine: STDOUT: 
	I0828 10:36:14.417740    4415 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:36:14.417758    4415 client.go:171] duration metric: took 383.204375ms to LocalClient.Create
	I0828 10:36:16.419003    4415 start.go:128] duration metric: took 2.403955708s to createHost
	I0828 10:36:16.419020    4415 start.go:83] releasing machines lock for "force-systemd-env-611000", held for 2.40401325s
	W0828 10:36:16.419048    4415 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:36:16.426171    4415 out.go:177] * Deleting "force-systemd-env-611000" in qemu2 ...
	W0828 10:36:16.435954    4415 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:36:16.435960    4415 start.go:729] Will try again in 5 seconds ...
	I0828 10:36:21.437960    4415 start.go:360] acquireMachinesLock for force-systemd-env-611000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:36:22.162275    4415 start.go:364] duration metric: took 724.25075ms to acquireMachinesLock for "force-systemd-env-611000"
	I0828 10:36:22.162431    4415 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:36:22.162671    4415 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:36:22.179335    4415 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0828 10:36:22.230093    4415 start.go:159] libmachine.API.Create for "force-systemd-env-611000" (driver="qemu2")
	I0828 10:36:22.230158    4415 client.go:168] LocalClient.Create starting
	I0828 10:36:22.230281    4415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:36:22.230336    4415 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:22.230352    4415 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:22.230430    4415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:36:22.230475    4415 main.go:141] libmachine: Decoding PEM data...
	I0828 10:36:22.230487    4415 main.go:141] libmachine: Parsing certificate...
	I0828 10:36:22.231060    4415 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:36:22.413417    4415 main.go:141] libmachine: Creating SSH key...
	I0828 10:36:22.444041    4415 main.go:141] libmachine: Creating Disk image...
	I0828 10:36:22.444046    4415 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:36:22.444232    4415 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/disk.qcow2
	I0828 10:36:22.453475    4415 main.go:141] libmachine: STDOUT: 
	I0828 10:36:22.453494    4415 main.go:141] libmachine: STDERR: 
	I0828 10:36:22.453537    4415 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/disk.qcow2 +20000M
	I0828 10:36:22.461355    4415 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:36:22.461378    4415 main.go:141] libmachine: STDERR: 
	I0828 10:36:22.461391    4415 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/disk.qcow2
	I0828 10:36:22.461395    4415 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:36:22.461405    4415 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:36:22.461432    4415 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:c1:e5:a4:a5:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/force-systemd-env-611000/disk.qcow2
	I0828 10:36:22.463034    4415 main.go:141] libmachine: STDOUT: 
	I0828 10:36:22.463050    4415 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:36:22.463062    4415 client.go:171] duration metric: took 232.906333ms to LocalClient.Create
	I0828 10:36:24.465262    4415 start.go:128] duration metric: took 2.302617833s to createHost
	I0828 10:36:24.465339    4415 start.go:83] releasing machines lock for "force-systemd-env-611000", held for 2.303109125s
	W0828 10:36:24.465732    4415 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-611000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-611000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:36:24.485607    4415 out.go:201] 
	W0828 10:36:24.492280    4415 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:36:24.492315    4415 out.go:270] * 
	* 
	W0828 10:36:24.494803    4415 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:36:24.504257    4415 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-611000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-611000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-611000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.097084ms)

-- stdout --
	* The control-plane node force-systemd-env-611000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-611000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-611000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-28 10:36:24.60035 -0700 PDT m=+2756.520732709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-611000 -n force-systemd-env-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-611000 -n force-systemd-env-611000: exit status 7 (35.6175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-611000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-611000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-611000
--- FAIL: TestForceSystemdEnv (10.84s)
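
Editor's note: had either VM started, both force-systemd tests would have asserted the cgroup driver inside the guest using the ssh command quoted above. A hedged sketch of that check (the expected values are general Docker defaults, not output from this run):

	# Inside a healthy guest, --force-systemd / MINIKUBE_FORCE_SYSTEMD=true should yield "systemd":
	out/minikube-darwin-arm64 -p force-systemd-env-611000 ssh "docker info --format {{.CgroupDriver}}"
	# Without forcing systemd, Docker's cgroup driver typically defaults to "cgroupfs".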

TestFunctional/parallel/ServiceCmdConnect (35.15s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-429000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-429000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-h4nz2" [66d5b42c-f19b-4b1e-8554-9f400ec16142] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-h4nz2" [66d5b42c-f19b-4b1e-8554-9f400ec16142] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.010238875s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31074
functional_test.go:1661: error fetching http://192.168.105.4:31074: Get "http://192.168.105.4:31074": dial tcp 192.168.105.4:31074: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31074: Get "http://192.168.105.4:31074": dial tcp 192.168.105.4:31074: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31074: Get "http://192.168.105.4:31074": dial tcp 192.168.105.4:31074: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31074: Get "http://192.168.105.4:31074": dial tcp 192.168.105.4:31074: connect: connection refused
E0828 10:09:20.612304    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:31074: Get "http://192.168.105.4:31074": dial tcp 192.168.105.4:31074: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31074: Get "http://192.168.105.4:31074": dial tcp 192.168.105.4:31074: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31074: Get "http://192.168.105.4:31074": dial tcp 192.168.105.4:31074: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31074: Get "http://192.168.105.4:31074": dial tcp 192.168.105.4:31074: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31074: Get "http://192.168.105.4:31074": dial tcp 192.168.105.4:31074: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-429000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-h4nz2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-429000/192.168.105.4
Start Time:       Wed, 28 Aug 2024 10:09:05 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://01e32ab77b91dbc777695120e97935216a6ccf5c5690e2d20f0d5103fe94eec9
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 28 Aug 2024 10:09:23 -0700
      Finished:     Wed, 28 Aug 2024 10:09:23 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d8lsm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-d8lsm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  33s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-h4nz2 to functional-429000
  Normal   Pulled     16s (x3 over 34s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    16s (x3 over 34s)  kubelet            Created container echoserver-arm
  Normal   Started    16s (x3 over 33s)  kubelet            Started container echoserver-arm
  Warning  BackOff    4s (x3 over 32s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-h4nz2_default(66d5b42c-f19b-4b1e-8554-9f400ec16142)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-429000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
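
Editor's note: the `exec format error` above is the actual container failure: the entrypoint binary inside registry.k8s.io/echoserver-arm:1.8 is built for a different CPU architecture than the arm64 node, so the pod crash-loops and the service has no ready backend. A sketch of how one might confirm the architecture mismatch (neither command was run in this report):

	# List the platforms the image manifest advertises:
	docker manifest inspect registry.k8s.io/echoserver-arm:1.8
	# Or inspect the image digest the node actually pulled:
	kubectl --context functional-429000 get pod -l app=hello-node-connect -o jsonpath='{.items[0].status.containerStatuses[0].imageID}'
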
functional_test.go:1614: (dbg) Run:  kubectl --context functional-429000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.62.67
IPs:                      10.108.62.67
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31074/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
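
Editor's note: the empty `Endpoints:` field above ties the pieces together: because the backing pod never reports Ready, the endpoints controller publishes no addresses for the service, so the NodePort at 192.168.105.4:31074 has nothing to forward to, which matches the repeated `connection refused` errors earlier in this test. A one-line check (hedged; not part of this run):

	kubectl --context functional-429000 get endpoints hello-node-connect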
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-429000 -n functional-429000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service | functional-429000                                                                                                   | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | service hello-node --url                                                                                            |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                    |                   |         |         |                     |                     |
	| service | functional-429000 service                                                                                           | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | hello-node --url                                                                                                    |                   |         |         |                     |                     |
	| addons  | functional-429000 addons list                                                                                       | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	| addons  | functional-429000 addons list                                                                                       | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | -o json                                                                                                             |                   |         |         |                     |                     |
	| service | functional-429000 service                                                                                           | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | hello-node-connect --url                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh findmnt                                                                                       | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount   | -p functional-429000                                                                                                | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1535466881/001:/mount-9p     |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh findmnt                                                                                       | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh -- ls                                                                                         | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh cat                                                                                           | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | /mount-9p/test-1724864969705486000                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh stat                                                                                          | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh stat                                                                                          | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh sudo                                                                                          | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh findmnt                                                                                       | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount   | -p functional-429000                                                                                                | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port603445905/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh findmnt                                                                                       | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh -- ls                                                                                         | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh sudo                                                                                          | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT |                     |
	|         | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount   | -p functional-429000                                                                                                | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3999750471/001:/mount1  |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount   | -p functional-429000                                                                                                | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3999750471/001:/mount2  |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount   | -p functional-429000                                                                                                | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3999750471/001:/mount3  |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh findmnt                                                                                       | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT |                     |
	|         | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh findmnt                                                                                       | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh findmnt                                                                                       | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT | 28 Aug 24 10:09 PDT |
	|         | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-429000 ssh findmnt                                                                                       | functional-429000 | jenkins | v1.33.1 | 28 Aug 24 10:09 PDT |                     |
	|         | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 10:08:11
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 10:08:11.015076    2561 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:08:11.015200    2561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:08:11.015202    2561 out.go:358] Setting ErrFile to fd 2...
	I0828 10:08:11.015204    2561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:08:11.015333    2561 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:08:11.016318    2561 out.go:352] Setting JSON to false
	I0828 10:08:11.032689    2561 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2254,"bootTime":1724862637,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:08:11.032765    2561 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:08:11.037230    2561 out.go:177] * [functional-429000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:08:11.046218    2561 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:08:11.046279    2561 notify.go:220] Checking for updates...
	I0828 10:08:11.055088    2561 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:08:11.059089    2561 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:08:11.062150    2561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:08:11.065236    2561 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:08:11.068174    2561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:08:11.071467    2561 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:08:11.071526    2561 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:08:11.076146    2561 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:08:11.083175    2561 start.go:297] selected driver: qemu2
	I0828 10:08:11.083181    2561 start.go:901] validating driver "qemu2" against &{Name:functional-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:08:11.083242    2561 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:08:11.085446    2561 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:08:11.085470    2561 cni.go:84] Creating CNI manager for ""
	I0828 10:08:11.085476    2561 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:08:11.085522    2561 start.go:340] cluster config:
	{Name:functional-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:08:11.088932    2561 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:08:11.097109    2561 out.go:177] * Starting "functional-429000" primary control-plane node in "functional-429000" cluster
	I0828 10:08:11.101055    2561 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:08:11.101067    2561 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:08:11.101075    2561 cache.go:56] Caching tarball of preloaded images
	I0828 10:08:11.101130    2561 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:08:11.101134    2561 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:08:11.101184    2561 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/config.json ...
	I0828 10:08:11.101645    2561 start.go:360] acquireMachinesLock for functional-429000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:08:11.101677    2561 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "functional-429000"
	I0828 10:08:11.101685    2561 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:08:11.101690    2561 fix.go:54] fixHost starting: 
	I0828 10:08:11.102351    2561 fix.go:112] recreateIfNeeded on functional-429000: state=Running err=<nil>
	W0828 10:08:11.102357    2561 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:08:11.106097    2561 out.go:177] * Updating the running qemu2 "functional-429000" VM ...
	I0828 10:08:11.110165    2561 machine.go:93] provisionDockerMachine start ...
	I0828 10:08:11.110207    2561 main.go:141] libmachine: Using SSH client type: native
	I0828 10:08:11.110361    2561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d4c5a0] 0x102d4ee00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0828 10:08:11.110364    2561 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 10:08:11.158821    2561 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-429000
	
	I0828 10:08:11.158831    2561 buildroot.go:166] provisioning hostname "functional-429000"
	I0828 10:08:11.158863    2561 main.go:141] libmachine: Using SSH client type: native
	I0828 10:08:11.158971    2561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d4c5a0] 0x102d4ee00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0828 10:08:11.158975    2561 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-429000 && echo "functional-429000" | sudo tee /etc/hostname
	I0828 10:08:11.209883    2561 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-429000
	
	I0828 10:08:11.209924    2561 main.go:141] libmachine: Using SSH client type: native
	I0828 10:08:11.210041    2561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d4c5a0] 0x102d4ee00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0828 10:08:11.210047    2561 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-429000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-429000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-429000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 10:08:11.257349    2561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 10:08:11.257358    2561 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19529-1176/.minikube CaCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19529-1176/.minikube}
	I0828 10:08:11.257364    2561 buildroot.go:174] setting up certificates
	I0828 10:08:11.257367    2561 provision.go:84] configureAuth start
	I0828 10:08:11.257372    2561 provision.go:143] copyHostCerts
	I0828 10:08:11.257442    2561 exec_runner.go:144] found /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem, removing ...
	I0828 10:08:11.257447    2561 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem
	I0828 10:08:11.257572    2561 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem (1078 bytes)
	I0828 10:08:11.257751    2561 exec_runner.go:144] found /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem, removing ...
	I0828 10:08:11.257753    2561 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem
	I0828 10:08:11.257875    2561 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem (1123 bytes)
	I0828 10:08:11.258018    2561 exec_runner.go:144] found /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem, removing ...
	I0828 10:08:11.258020    2561 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem
	I0828 10:08:11.258085    2561 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem (1679 bytes)
	I0828 10:08:11.258179    2561 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem org=jenkins.functional-429000 san=[127.0.0.1 192.168.105.4 functional-429000 localhost minikube]
	I0828 10:08:11.298425    2561 provision.go:177] copyRemoteCerts
	I0828 10:08:11.298469    2561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 10:08:11.298475    2561 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/functional-429000/id_rsa Username:docker}
	I0828 10:08:11.325271    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 10:08:11.334396    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0828 10:08:11.342029    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 10:08:11.350118    2561 provision.go:87] duration metric: took 92.748166ms to configureAuth
	I0828 10:08:11.350124    2561 buildroot.go:189] setting minikube options for container-runtime
	I0828 10:08:11.350236    2561 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:08:11.350272    2561 main.go:141] libmachine: Using SSH client type: native
	I0828 10:08:11.350361    2561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d4c5a0] 0x102d4ee00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0828 10:08:11.350364    2561 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0828 10:08:11.397885    2561 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0828 10:08:11.397890    2561 buildroot.go:70] root file system type: tmpfs
	I0828 10:08:11.397937    2561 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0828 10:08:11.398002    2561 main.go:141] libmachine: Using SSH client type: native
	I0828 10:08:11.398117    2561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d4c5a0] 0x102d4ee00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0828 10:08:11.398147    2561 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0828 10:08:11.452275    2561 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0828 10:08:11.452329    2561 main.go:141] libmachine: Using SSH client type: native
	I0828 10:08:11.452449    2561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d4c5a0] 0x102d4ee00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0828 10:08:11.452455    2561 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0828 10:08:11.501167    2561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 10:08:11.501173    2561 machine.go:96] duration metric: took 391.012833ms to provisionDockerMachine
	I0828 10:08:11.501178    2561 start.go:293] postStartSetup for "functional-429000" (driver="qemu2")
	I0828 10:08:11.501184    2561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 10:08:11.501236    2561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 10:08:11.501243    2561 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/functional-429000/id_rsa Username:docker}
	I0828 10:08:11.528469    2561 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 10:08:11.530038    2561 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 10:08:11.530043    2561 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19529-1176/.minikube/addons for local assets ...
	I0828 10:08:11.530126    2561 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19529-1176/.minikube/files for local assets ...
	I0828 10:08:11.530241    2561 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0828 10:08:11.530353    2561 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/test/nested/copy/1678/hosts -> hosts in /etc/test/nested/copy/1678
	I0828 10:08:11.530383    2561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1678
	I0828 10:08:11.533685    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0828 10:08:11.541961    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/test/nested/copy/1678/hosts --> /etc/test/nested/copy/1678/hosts (40 bytes)
	I0828 10:08:11.550132    2561 start.go:296] duration metric: took 48.950959ms for postStartSetup
	I0828 10:08:11.550146    2561 fix.go:56] duration metric: took 448.464583ms for fixHost
	I0828 10:08:11.550187    2561 main.go:141] libmachine: Using SSH client type: native
	I0828 10:08:11.550300    2561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d4c5a0] 0x102d4ee00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0828 10:08:11.550303    2561 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 10:08:11.600670    2561 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724864891.615319332
	
	I0828 10:08:11.600674    2561 fix.go:216] guest clock: 1724864891.615319332
	I0828 10:08:11.600678    2561 fix.go:229] Guest: 2024-08-28 10:08:11.615319332 -0700 PDT Remote: 2024-08-28 10:08:11.550147 -0700 PDT m=+0.553809751 (delta=65.172332ms)
	I0828 10:08:11.600687    2561 fix.go:200] guest clock delta is within tolerance: 65.172332ms
	I0828 10:08:11.600689    2561 start.go:83] releasing machines lock for "functional-429000", held for 499.018792ms
	I0828 10:08:11.600971    2561 ssh_runner.go:195] Run: cat /version.json
	I0828 10:08:11.600976    2561 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/functional-429000/id_rsa Username:docker}
	I0828 10:08:11.600986    2561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 10:08:11.600999    2561 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/functional-429000/id_rsa Username:docker}
	I0828 10:08:11.674263    2561 ssh_runner.go:195] Run: systemctl --version
	I0828 10:08:11.676317    2561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 10:08:11.678132    2561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 10:08:11.678157    2561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 10:08:11.681394    2561 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0828 10:08:11.681399    2561 start.go:495] detecting cgroup driver to use...
	I0828 10:08:11.681461    2561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 10:08:11.687995    2561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0828 10:08:11.692011    2561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0828 10:08:11.695916    2561 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0828 10:08:11.695936    2561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0828 10:08:11.699890    2561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 10:08:11.704099    2561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0828 10:08:11.708162    2561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 10:08:11.712133    2561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 10:08:11.716145    2561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0828 10:08:11.719859    2561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0828 10:08:11.724005    2561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0828 10:08:11.728095    2561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 10:08:11.731728    2561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 10:08:11.735157    2561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:08:11.840503    2561 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0828 10:08:11.848046    2561 start.go:495] detecting cgroup driver to use...
	I0828 10:08:11.848107    2561 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0828 10:08:11.855064    2561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 10:08:11.860747    2561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 10:08:11.868983    2561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 10:08:11.874471    2561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 10:08:11.879350    2561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 10:08:11.886146    2561 ssh_runner.go:195] Run: which cri-dockerd
	I0828 10:08:11.887561    2561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0828 10:08:11.891342    2561 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0828 10:08:11.897382    2561 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0828 10:08:12.001647    2561 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0828 10:08:12.109300    2561 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0828 10:08:12.109346    2561 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0828 10:08:12.115611    2561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:08:12.221701    2561 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0828 10:08:24.562675    2561 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.3412135s)
	I0828 10:08:24.562738    2561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0828 10:08:24.568762    2561 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0828 10:08:24.576172    2561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 10:08:24.582136    2561 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0828 10:08:24.659863    2561 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0828 10:08:24.748736    2561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:08:24.832963    2561 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0828 10:08:24.839973    2561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 10:08:24.846044    2561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:08:24.932882    2561 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0828 10:08:24.961560    2561 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0828 10:08:24.961627    2561 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0828 10:08:24.963864    2561 start.go:563] Will wait 60s for crictl version
	I0828 10:08:24.963918    2561 ssh_runner.go:195] Run: which crictl
	I0828 10:08:24.965282    2561 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 10:08:24.982126    2561 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0828 10:08:24.982208    2561 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 10:08:24.993579    2561 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 10:08:25.009291    2561 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0828 10:08:25.009422    2561 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0828 10:08:25.015231    2561 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0828 10:08:25.019208    2561 kubeadm.go:883] updating cluster {Name:functional-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 10:08:25.019252    2561 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:08:25.019292    2561 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 10:08:25.025231    2561 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-429000
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0828 10:08:25.025235    2561 docker.go:615] Images already preloaded, skipping extraction
	I0828 10:08:25.025286    2561 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 10:08:25.030957    2561 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-429000
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0828 10:08:25.030964    2561 cache_images.go:84] Images are preloaded, skipping loading
	I0828 10:08:25.030968    2561 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.0 docker true true} ...
	I0828 10:08:25.031017    2561 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-429000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 10:08:25.031062    2561 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0828 10:08:25.047594    2561 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0828 10:08:25.047648    2561 cni.go:84] Creating CNI manager for ""
	I0828 10:08:25.047655    2561 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:08:25.047659    2561 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 10:08:25.047668    2561 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-429000 NodeName:functional-429000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 10:08:25.047725    2561 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-429000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 10:08:25.047785    2561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 10:08:25.051618    2561 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 10:08:25.051643    2561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 10:08:25.055331    2561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0828 10:08:25.061336    2561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 10:08:25.067100    2561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I0828 10:08:25.073373    2561 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0828 10:08:25.074634    2561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:08:25.160897    2561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 10:08:25.166753    2561 certs.go:68] Setting up /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000 for IP: 192.168.105.4
	I0828 10:08:25.166756    2561 certs.go:194] generating shared ca certs ...
	I0828 10:08:25.166764    2561 certs.go:226] acquiring lock for ca certs: {Name:mkf861e7f19b199967d33246b8c25f60e0670f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:08:25.166916    2561 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.key
	I0828 10:08:25.166967    2561 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.key
	I0828 10:08:25.166972    2561 certs.go:256] generating profile certs ...
	I0828 10:08:25.167036    2561 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.key
	I0828 10:08:25.167096    2561 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/apiserver.key.1c3f628b
	I0828 10:08:25.167138    2561 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/proxy-client.key
	I0828 10:08:25.167280    2561 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/1678.pem (1338 bytes)
	W0828 10:08:25.167306    2561 certs.go:480] ignoring /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0828 10:08:25.167310    2561 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 10:08:25.167328    2561 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem (1078 bytes)
	I0828 10:08:25.167345    2561 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem (1123 bytes)
	I0828 10:08:25.167363    2561 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem (1679 bytes)
	I0828 10:08:25.167404    2561 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0828 10:08:25.167726    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 10:08:25.176457    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 10:08:25.184479    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 10:08:25.192401    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 10:08:25.200358    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 10:08:25.208415    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 10:08:25.216372    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 10:08:25.224639    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 10:08:25.232686    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0828 10:08:25.240614    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0828 10:08:25.248484    2561 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 10:08:25.256541    2561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
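(Editor's note: the "scp memory" entries above denote files that minikube renders in-process, here the kubeconfig, and streams into the guest over SSH rather than copying from the host disk. A rough stand-in for what such a transfer amounts to; kubeconfig.rendered and id_rsa below are placeholders, not paths from this log:

    # Illustrative equivalent of an "scp memory" transfer: stream a locally
    # rendered document into the guest and write it with root privileges.
    cat kubeconfig.rendered | ssh -i id_rsa docker@192.168.105.4 \
      "sudo tee /var/lib/minikube/kubeconfig >/dev/null"
)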
	I0828 10:08:25.262603    2561 ssh_runner.go:195] Run: openssl version
	I0828 10:08:25.264596    2561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 10:08:25.268316    2561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 10:08:25.269779    2561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:51 /usr/share/ca-certificates/minikubeCA.pem
	I0828 10:08:25.269797    2561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 10:08:25.271669    2561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 10:08:25.275453    2561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0828 10:08:25.279346    2561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0828 10:08:25.280784    2561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:06 /usr/share/ca-certificates/1678.pem
	I0828 10:08:25.280803    2561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0828 10:08:25.282946    2561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0828 10:08:25.286718    2561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0828 10:08:25.290654    2561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0828 10:08:25.292228    2561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:06 /usr/share/ca-certificates/16782.pem
	I0828 10:08:25.292247    2561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0828 10:08:25.294335    2561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
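(Editor's note: the three test/hash/ln sequences above install each CA into OpenSSL's hashed-directory layout: the certificate is linked into /etc/ssl/certs both under its own name and under <subject-hash>.0, which is how OpenSSL locates trust anchors during verification. A minimal sketch of one round, using the minikubeCA path from the log, where b5213941 is the subject hash seen above:

    # Compute the subject-name hash and create the hash-named symlink
    # that OpenSSL's hashed certificate directory lookup expects.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
)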
	I0828 10:08:25.297650    2561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 10:08:25.299329    2561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 10:08:25.301644    2561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 10:08:25.303645    2561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 10:08:25.305685    2561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 10:08:25.307667    2561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 10:08:25.309700    2561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
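(Editor's note: the six openssl probes above are 24-hour expiry checks: -checkend 86400 makes openssl exit non-zero if the certificate will have expired 86400 seconds from now, which is what would force regeneration before the restart. For example:

    # Exit status 0 = still valid 24h from now; non-zero = expiring, regenerate.
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
)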
	I0828 10:08:25.311747    2561 kubeadm.go:392] StartCluster: {Name:functional-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:08:25.311810    2561 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0828 10:08:25.318902    2561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 10:08:25.322448    2561 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 10:08:25.322451    2561 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 10:08:25.322473    2561 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 10:08:25.325653    2561 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 10:08:25.325941    2561 kubeconfig.go:125] found "functional-429000" server: "https://192.168.105.4:8441"
	I0828 10:08:25.326574    2561 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 10:08:25.329925    2561 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
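(Editor's note: the drift is expected here. The StartCluster config above carries an ExtraOptions entry that replaces the API server's default admission-plugin list with NamespaceAutoProvision, so the freshly rendered kubeadm.yaml.new differs from the file written at first start and the control plane must be reconfigured. This kind of override is normally injected at start time with minikube's --extra-config flag; the exact invocation used by the test is not shown in this log, but it would look roughly like:

    # Hedged reconstruction of the start flag implied by ExtraOptions above.
    minikube start -p functional-429000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
)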
	I0828 10:08:25.329928    2561 kubeadm.go:1160] stopping kube-system containers ...
	I0828 10:08:25.329963    2561 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0828 10:08:25.337142    2561 docker.go:483] Stopping containers: [cf8a00eee288 e51cb418737a e1e32638b913 0ee0d4b86903 bfe4ab520b8a ebceef4d128e f42e94279c85 bd858d02900a cd130765c2c6 720fb6aa7a92 b69e5f19371f 2c47e4bb7299 4f529c868133 5b483efc3791 f836b138c7d4 d06ffa81df9a 0e741f38b64a 15141293de77 b429eedbcee9 6723d925d9cf 1a4f372855b7 fd6541eaccea a427a66a19c5 55fcbb1bd177 c30aa3f1d07d 4a6ecb25854b 75e879e55c87 7d779e2ad832 21589813b5ce]
	I0828 10:08:25.337193    2561 ssh_runner.go:195] Run: docker stop cf8a00eee288 e51cb418737a e1e32638b913 0ee0d4b86903 bfe4ab520b8a ebceef4d128e f42e94279c85 bd858d02900a cd130765c2c6 720fb6aa7a92 b69e5f19371f 2c47e4bb7299 4f529c868133 5b483efc3791 f836b138c7d4 d06ffa81df9a 0e741f38b64a 15141293de77 b429eedbcee9 6723d925d9cf 1a4f372855b7 fd6541eaccea a427a66a19c5 55fcbb1bd177 c30aa3f1d07d 4a6ecb25854b 75e879e55c87 7d779e2ad832 21589813b5ce
	I0828 10:08:25.344362    2561 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 10:08:25.460947    2561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 10:08:25.467007    2561 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug 28 17:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug 28 17:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug 28 17:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Aug 28 17:07 /etc/kubernetes/scheduler.conf
	
	I0828 10:08:25.467049    2561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0828 10:08:25.472560    2561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0828 10:08:25.477266    2561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0828 10:08:25.481571    2561 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0828 10:08:25.481598    2561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 10:08:25.485783    2561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0828 10:08:25.489740    2561 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0828 10:08:25.489768    2561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
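(Editor's note: the grep/rm pairs above validate each kubeconfig under /etc/kubernetes: any file that does not point at the expected endpoint https://control-plane.minikube.internal:8441 is deleted so that kubeadm regenerates it in the kubeconfig phase below. Per file, the logic reduces to:

    # Keep a kubeconfig only if it targets the expected control-plane endpoint.
    f=/etc/kubernetes/scheduler.conf
    sudo grep -q "https://control-plane.minikube.internal:8441" "$f" || sudo rm -f "$f"
)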
	I0828 10:08:25.493237    2561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 10:08:25.496791    2561 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:08:25.514530    2561 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:08:25.979565    2561 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:08:26.103671    2561 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:08:26.137157    2561 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
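(Editor's note: the restart path does not rerun a full kubeadm init; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered config, using the kubeadm binary minikube cached for v1.31.0. The equivalent manual sequence inside the guest:

    # Replay the init phases one at a time, as the log above does.
    # $phase is intentionally unquoted so "certs all" expands to two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
)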
	I0828 10:08:26.170184    2561 api_server.go:52] waiting for apiserver process to appear ...
	I0828 10:08:26.170262    2561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:08:26.672307    2561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:08:27.172308    2561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:08:27.177843    2561 api_server.go:72] duration metric: took 1.007679s to wait for apiserver process to appear ...
	I0828 10:08:27.177850    2561 api_server.go:88] waiting for apiserver healthz status ...
	I0828 10:08:27.177859    2561 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0828 10:08:29.251779    2561 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 10:08:29.251788    2561 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 10:08:29.251793    2561 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0828 10:08:29.310207    2561 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 10:08:29.310218    2561 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 10:08:29.679991    2561 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0828 10:08:29.693069    2561 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 10:08:29.693096    2561 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 10:08:30.179898    2561 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0828 10:08:30.184842    2561 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 10:08:30.184852    2561 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 10:08:30.679953    2561 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0828 10:08:30.683999    2561 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
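(Editor's note: the 403 -> 500 -> 200 progression above is the normal restart sequence: the first probe is rejected by RBAC before the anonymous health-check rules exist, then /healthz returns 500 with a per-check breakdown while the post-start hooks shown as [-] finish bootstrapping, and finally a bare "ok". The apiserver only prints the check list when something fails; a healthy server can be asked for it explicitly:

    # -k because the endpoint serves a minikube-CA-signed certificate.
    curl -k "https://192.168.105.4:8441/healthz?verbose"
)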
	I0828 10:08:30.689010    2561 api_server.go:141] control plane version: v1.31.0
	I0828 10:08:30.689020    2561 api_server.go:131] duration metric: took 3.511239625s to wait for apiserver health ...
	I0828 10:08:30.689025    2561 cni.go:84] Creating CNI manager for ""
	I0828 10:08:30.689031    2561 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:08:30.735612    2561 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 10:08:30.739576    2561 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 10:08:30.743619    2561 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
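(Editor's note: the 496 bytes written to /etc/cni/net.d/1-k8s.conflist are minikube's default bridge CNI configuration, again streamed from memory. The exact contents are not in this log; a hypothetical minimal bridge conflist of the same general shape would be:

    # Placeholder approximation of a bridge CNI conflist; the real
    # 1-k8s.conflist may differ in names, cniVersion, and IPAM ranges.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
)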
	I0828 10:08:30.751366    2561 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 10:08:30.755810    2561 system_pods.go:59] 7 kube-system pods found
	I0828 10:08:30.755819    2561 system_pods.go:61] "coredns-6f6b679f8f-n77h6" [c9a3a1bd-7e99-4d3d-9e0e-53255f6bd92b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 10:08:30.755822    2561 system_pods.go:61] "etcd-functional-429000" [0b9472ea-b347-48a8-a61b-87cc914140fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 10:08:30.755824    2561 system_pods.go:61] "kube-apiserver-functional-429000" [0972e216-bbf6-44d1-8328-a8798f50c0f5] Pending
	I0828 10:08:30.755826    2561 system_pods.go:61] "kube-controller-manager-functional-429000" [882d6f72-6946-404b-ad0f-f011d4f11f13] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 10:08:30.755831    2561 system_pods.go:61] "kube-proxy-gxcrt" [5e75b5ba-87fe-424d-9bb5-016816b55df8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 10:08:30.755834    2561 system_pods.go:61] "kube-scheduler-functional-429000" [51cb3bb9-627c-4927-9b7c-9869aa7e09f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 10:08:30.755836    2561 system_pods.go:61] "storage-provisioner" [b2a1970f-80ce-4025-ab2f-7caf3b7ea2e8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 10:08:30.755838    2561 system_pods.go:74] duration metric: took 4.468167ms to wait for pod list to return data ...
	I0828 10:08:30.755841    2561 node_conditions.go:102] verifying NodePressure condition ...
	I0828 10:08:30.757417    2561 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 10:08:30.757422    2561 node_conditions.go:123] node cpu capacity is 2
	I0828 10:08:30.757426    2561 node_conditions.go:105] duration metric: took 1.583208ms to run NodePressure ...
	I0828 10:08:30.757432    2561 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:08:30.978156    2561 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 10:08:30.981113    2561 kubeadm.go:739] kubelet initialised
	I0828 10:08:30.981117    2561 kubeadm.go:740] duration metric: took 2.951041ms waiting for restarted kubelet to initialise ...
	I0828 10:08:30.981122    2561 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 10:08:30.984160    2561 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-n77h6" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:32.989257    2561 pod_ready.go:103] pod "coredns-6f6b679f8f-n77h6" in "kube-system" namespace has status "Ready":"False"
	I0828 10:08:34.996154    2561 pod_ready.go:103] pod "coredns-6f6b679f8f-n77h6" in "kube-system" namespace has status "Ready":"False"
	I0828 10:08:36.999772    2561 pod_ready.go:103] pod "coredns-6f6b679f8f-n77h6" in "kube-system" namespace has status "Ready":"False"
	I0828 10:08:39.496530    2561 pod_ready.go:103] pod "coredns-6f6b679f8f-n77h6" in "kube-system" namespace has status "Ready":"False"
	I0828 10:08:40.490550    2561 pod_ready.go:93] pod "coredns-6f6b679f8f-n77h6" in "kube-system" namespace has status "Ready":"True"
	I0828 10:08:40.490560    2561 pod_ready.go:82] duration metric: took 9.506589125s for pod "coredns-6f6b679f8f-n77h6" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:40.490566    2561 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:40.493968    2561 pod_ready.go:93] pod "etcd-functional-429000" in "kube-system" namespace has status "Ready":"True"
	I0828 10:08:40.493973    2561 pod_ready.go:82] duration metric: took 3.40225ms for pod "etcd-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:40.493978    2561 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:40.496875    2561 pod_ready.go:93] pod "kube-apiserver-functional-429000" in "kube-system" namespace has status "Ready":"True"
	I0828 10:08:40.496880    2561 pod_ready.go:82] duration metric: took 2.897375ms for pod "kube-apiserver-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:40.496884    2561 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:40.499526    2561 pod_ready.go:93] pod "kube-controller-manager-functional-429000" in "kube-system" namespace has status "Ready":"True"
	I0828 10:08:40.499530    2561 pod_ready.go:82] duration metric: took 2.642667ms for pod "kube-controller-manager-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:40.499535    2561 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-gxcrt" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:40.502184    2561 pod_ready.go:93] pod "kube-proxy-gxcrt" in "kube-system" namespace has status "Ready":"True"
	I0828 10:08:40.502190    2561 pod_ready.go:82] duration metric: took 2.652667ms for pod "kube-proxy-gxcrt" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:40.502195    2561 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:40.895094    2561 pod_ready.go:93] pod "kube-scheduler-functional-429000" in "kube-system" namespace has status "Ready":"True"
	I0828 10:08:40.895119    2561 pod_ready.go:82] duration metric: took 392.91925ms for pod "kube-scheduler-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:40.895145    2561 pod_ready.go:39] duration metric: took 9.914209458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 10:08:40.895195    2561 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 10:08:40.911217    2561 ops.go:34] apiserver oom_adj: -16
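(Editor's note: the oom_adj probe verifies the API server is shielded from the kernel OOM killer. On the legacy -17..15 oom_adj scale, negative values make a process an unlikely kill candidate (-17 exempts it entirely); the -16 read here corresponds to the very low oom_score_adj the kubelet assigns to control-plane pods. The check is simply:

    # Read the legacy OOM adjustment of the running kube-apiserver.
    cat "/proc/$(pgrep kube-apiserver)/oom_adj"
)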
	I0828 10:08:40.911236    2561 kubeadm.go:597] duration metric: took 15.589093541s to restartPrimaryControlPlane
	I0828 10:08:40.911249    2561 kubeadm.go:394] duration metric: took 15.599819167s to StartCluster
	I0828 10:08:40.911278    2561 settings.go:142] acquiring lock: {Name:mk584f5f183a19e050e7184c0c9e70ea26430337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:08:40.911628    2561 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:08:40.912773    2561 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/kubeconfig: {Name:mke8b729c65a2ae9e4d9042dc78e2127479f8609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:08:40.913573    2561 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:08:40.913591    2561 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 10:08:40.913692    2561 addons.go:69] Setting storage-provisioner=true in profile "functional-429000"
	I0828 10:08:40.913726    2561 addons.go:234] Setting addon storage-provisioner=true in "functional-429000"
	W0828 10:08:40.913732    2561 addons.go:243] addon storage-provisioner should already be in state true
	I0828 10:08:40.913766    2561 host.go:66] Checking if "functional-429000" exists ...
	I0828 10:08:40.913796    2561 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:08:40.913786    2561 addons.go:69] Setting default-storageclass=true in profile "functional-429000"
	I0828 10:08:40.913835    2561 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-429000"
	I0828 10:08:40.916538    2561 addons.go:234] Setting addon default-storageclass=true in "functional-429000"
	W0828 10:08:40.916550    2561 addons.go:243] addon default-storageclass should already be in state true
	I0828 10:08:40.916580    2561 host.go:66] Checking if "functional-429000" exists ...
	I0828 10:08:40.917640    2561 out.go:177] * Verifying Kubernetes components...
	I0828 10:08:40.922706    2561 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 10:08:40.922720    2561 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 10:08:40.922740    2561 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/functional-429000/id_rsa Username:docker}
	I0828 10:08:40.924510    2561 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:08:40.924611    2561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:08:40.928725    2561 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 10:08:40.928734    2561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 10:08:40.928747    2561 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/functional-429000/id_rsa Username:docker}
	I0828 10:08:41.044022    2561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 10:08:41.049734    2561 node_ready.go:35] waiting up to 6m0s for node "functional-429000" to be "Ready" ...
	I0828 10:08:41.088211    2561 node_ready.go:49] node "functional-429000" has status "Ready":"True"
	I0828 10:08:41.088220    2561 node_ready.go:38] duration metric: took 38.475292ms for node "functional-429000" to be "Ready" ...
	I0828 10:08:41.088223    2561 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 10:08:41.095131    2561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 10:08:41.098184    2561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 10:08:41.291171    2561 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-n77h6" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:41.427220    2561 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0828 10:08:41.435097    2561 addons.go:510] duration metric: took 521.532583ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0828 10:08:41.691689    2561 pod_ready.go:93] pod "coredns-6f6b679f8f-n77h6" in "kube-system" namespace has status "Ready":"True"
	I0828 10:08:41.691719    2561 pod_ready.go:82] duration metric: took 400.530833ms for pod "coredns-6f6b679f8f-n77h6" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:41.691731    2561 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:42.095128    2561 pod_ready.go:93] pod "etcd-functional-429000" in "kube-system" namespace has status "Ready":"True"
	I0828 10:08:42.095148    2561 pod_ready.go:82] duration metric: took 403.413583ms for pod "etcd-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:42.095166    2561 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:42.496875    2561 pod_ready.go:93] pod "kube-apiserver-functional-429000" in "kube-system" namespace has status "Ready":"True"
	I0828 10:08:42.496905    2561 pod_ready.go:82] duration metric: took 401.728208ms for pod "kube-apiserver-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:42.496925    2561 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:42.890993    2561 pod_ready.go:93] pod "kube-controller-manager-functional-429000" in "kube-system" namespace has status "Ready":"True"
	I0828 10:08:42.891001    2561 pod_ready.go:82] duration metric: took 394.07225ms for pod "kube-controller-manager-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:42.891009    2561 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gxcrt" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:43.296062    2561 pod_ready.go:93] pod "kube-proxy-gxcrt" in "kube-system" namespace has status "Ready":"True"
	I0828 10:08:43.296077    2561 pod_ready.go:82] duration metric: took 405.068834ms for pod "kube-proxy-gxcrt" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:43.296087    2561 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:43.694549    2561 pod_ready.go:93] pod "kube-scheduler-functional-429000" in "kube-system" namespace has status "Ready":"True"
	I0828 10:08:43.694569    2561 pod_ready.go:82] duration metric: took 398.481375ms for pod "kube-scheduler-functional-429000" in "kube-system" namespace to be "Ready" ...
	I0828 10:08:43.694591    2561 pod_ready.go:39] duration metric: took 2.606412167s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 10:08:43.694644    2561 api_server.go:52] waiting for apiserver process to appear ...
	I0828 10:08:43.694955    2561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:08:43.715638    2561 api_server.go:72] duration metric: took 2.802095209s to wait for apiserver process to appear ...
	I0828 10:08:43.715647    2561 api_server.go:88] waiting for apiserver healthz status ...
	I0828 10:08:43.715661    2561 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0828 10:08:43.722364    2561 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0828 10:08:43.723429    2561 api_server.go:141] control plane version: v1.31.0
	I0828 10:08:43.723439    2561 api_server.go:131] duration metric: took 7.788042ms to wait for apiserver health ...
	I0828 10:08:43.723446    2561 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 10:08:43.898725    2561 system_pods.go:59] 7 kube-system pods found
	I0828 10:08:43.898744    2561 system_pods.go:61] "coredns-6f6b679f8f-n77h6" [c9a3a1bd-7e99-4d3d-9e0e-53255f6bd92b] Running
	I0828 10:08:43.898750    2561 system_pods.go:61] "etcd-functional-429000" [0b9472ea-b347-48a8-a61b-87cc914140fd] Running
	I0828 10:08:43.898756    2561 system_pods.go:61] "kube-apiserver-functional-429000" [0972e216-bbf6-44d1-8328-a8798f50c0f5] Running
	I0828 10:08:43.898764    2561 system_pods.go:61] "kube-controller-manager-functional-429000" [882d6f72-6946-404b-ad0f-f011d4f11f13] Running
	I0828 10:08:43.898769    2561 system_pods.go:61] "kube-proxy-gxcrt" [5e75b5ba-87fe-424d-9bb5-016816b55df8] Running
	I0828 10:08:43.898773    2561 system_pods.go:61] "kube-scheduler-functional-429000" [51cb3bb9-627c-4927-9b7c-9869aa7e09f0] Running
	I0828 10:08:43.898778    2561 system_pods.go:61] "storage-provisioner" [b2a1970f-80ce-4025-ab2f-7caf3b7ea2e8] Running
	I0828 10:08:43.898783    2561 system_pods.go:74] duration metric: took 175.335042ms to wait for pod list to return data ...
	I0828 10:08:43.898790    2561 default_sa.go:34] waiting for default service account to be created ...
	I0828 10:08:44.094961    2561 default_sa.go:45] found service account: "default"
	I0828 10:08:44.094998    2561 default_sa.go:55] duration metric: took 196.197917ms for default service account to be created ...
	I0828 10:08:44.095015    2561 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 10:08:44.303264    2561 system_pods.go:86] 7 kube-system pods found
	I0828 10:08:44.303299    2561 system_pods.go:89] "coredns-6f6b679f8f-n77h6" [c9a3a1bd-7e99-4d3d-9e0e-53255f6bd92b] Running
	I0828 10:08:44.303313    2561 system_pods.go:89] "etcd-functional-429000" [0b9472ea-b347-48a8-a61b-87cc914140fd] Running
	I0828 10:08:44.303319    2561 system_pods.go:89] "kube-apiserver-functional-429000" [0972e216-bbf6-44d1-8328-a8798f50c0f5] Running
	I0828 10:08:44.303328    2561 system_pods.go:89] "kube-controller-manager-functional-429000" [882d6f72-6946-404b-ad0f-f011d4f11f13] Running
	I0828 10:08:44.303344    2561 system_pods.go:89] "kube-proxy-gxcrt" [5e75b5ba-87fe-424d-9bb5-016816b55df8] Running
	I0828 10:08:44.303349    2561 system_pods.go:89] "kube-scheduler-functional-429000" [51cb3bb9-627c-4927-9b7c-9869aa7e09f0] Running
	I0828 10:08:44.303353    2561 system_pods.go:89] "storage-provisioner" [b2a1970f-80ce-4025-ab2f-7caf3b7ea2e8] Running
	I0828 10:08:44.303367    2561 system_pods.go:126] duration metric: took 208.34825ms to wait for k8s-apps to be running ...
	I0828 10:08:44.303382    2561 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 10:08:44.303601    2561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 10:08:44.323365    2561 system_svc.go:56] duration metric: took 19.976417ms WaitForService to wait for kubelet
	I0828 10:08:44.323387    2561 kubeadm.go:582] duration metric: took 3.409854834s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:08:44.323409    2561 node_conditions.go:102] verifying NodePressure condition ...
	I0828 10:08:44.494657    2561 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 10:08:44.494681    2561 node_conditions.go:123] node cpu capacity is 2
	I0828 10:08:44.494706    2561 node_conditions.go:105] duration metric: took 171.291791ms to run NodePressure ...
	I0828 10:08:44.494731    2561 start.go:241] waiting for startup goroutines ...
	I0828 10:08:44.494743    2561 start.go:246] waiting for cluster config update ...
	I0828 10:08:44.494761    2561 start.go:255] writing updated cluster config ...
	I0828 10:08:44.495997    2561 ssh_runner.go:195] Run: rm -f paused
	I0828 10:08:44.557603    2561 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0828 10:08:44.561738    2561 out.go:201] 
	W0828 10:08:44.565793    2561 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0828 10:08:44.568680    2561 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0828 10:08:44.575664    2561 out.go:177] * Done! kubectl is now configured to use "functional-429000" cluster and "default" namespace by default
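(Editor's note: the closing warning reflects the kubectl version-skew rule: a client is only supported within one minor version of the server, and 1.29 against 1.31 is a skew of two. The hint in the log is the built-in workaround, which runs a version-matched kubectl that minikube downloads for the cluster:

    # Use minikube's version-matched kubectl against this profile.
    minikube -p functional-429000 kubectl -- get pods -A
)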
	
	
	==> Docker <==
	Aug 28 17:09:23 functional-429000 dockerd[5670]: time="2024-08-28T17:09:23.217464710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 28 17:09:23 functional-429000 dockerd[5670]: time="2024-08-28T17:09:23.217470502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 28 17:09:23 functional-429000 dockerd[5670]: time="2024-08-28T17:09:23.217497919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 28 17:09:23 functional-429000 dockerd[5663]: time="2024-08-28T17:09:23.254544420Z" level=info msg="ignoring event" container=01e32ab77b91dbc777695120e97935216a6ccf5c5690e2d20f0d5103fe94eec9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:09:23 functional-429000 dockerd[5670]: time="2024-08-28T17:09:23.254738044Z" level=info msg="shim disconnected" id=01e32ab77b91dbc777695120e97935216a6ccf5c5690e2d20f0d5103fe94eec9 namespace=moby
	Aug 28 17:09:23 functional-429000 dockerd[5670]: time="2024-08-28T17:09:23.254817043Z" level=warning msg="cleaning up after shim disconnected" id=01e32ab77b91dbc777695120e97935216a6ccf5c5690e2d20f0d5103fe94eec9 namespace=moby
	Aug 28 17:09:23 functional-429000 dockerd[5670]: time="2024-08-28T17:09:23.254835460Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 28 17:09:30 functional-429000 dockerd[5670]: time="2024-08-28T17:09:30.989713279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 28 17:09:30 functional-429000 dockerd[5670]: time="2024-08-28T17:09:30.989767987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 28 17:09:30 functional-429000 dockerd[5670]: time="2024-08-28T17:09:30.989932819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 28 17:09:30 functional-429000 dockerd[5670]: time="2024-08-28T17:09:30.989966527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 28 17:09:31 functional-429000 cri-dockerd[5918]: time="2024-08-28T17:09:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b71ae668bb642e2203037bbf11a9c95dcbf4ef58ace19ab3911257049bd8a2d3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 28 17:09:34 functional-429000 cri-dockerd[5918]: time="2024-08-28T17:09:34Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 28 17:09:34 functional-429000 dockerd[5670]: time="2024-08-28T17:09:34.357431770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 28 17:09:34 functional-429000 dockerd[5670]: time="2024-08-28T17:09:34.357492894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 28 17:09:34 functional-429000 dockerd[5670]: time="2024-08-28T17:09:34.357656102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 28 17:09:34 functional-429000 dockerd[5670]: time="2024-08-28T17:09:34.357734643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 28 17:09:34 functional-429000 dockerd[5663]: time="2024-08-28T17:09:34.390676198Z" level=info msg="ignoring event" container=881def8d1d47e8a7be4429a626faf2227f24bdadfb8fff08c0061487051fa727 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:09:34 functional-429000 dockerd[5670]: time="2024-08-28T17:09:34.390753572Z" level=info msg="shim disconnected" id=881def8d1d47e8a7be4429a626faf2227f24bdadfb8fff08c0061487051fa727 namespace=moby
	Aug 28 17:09:34 functional-429000 dockerd[5670]: time="2024-08-28T17:09:34.390781697Z" level=warning msg="cleaning up after shim disconnected" id=881def8d1d47e8a7be4429a626faf2227f24bdadfb8fff08c0061487051fa727 namespace=moby
	Aug 28 17:09:34 functional-429000 dockerd[5670]: time="2024-08-28T17:09:34.390786614Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 28 17:09:36 functional-429000 dockerd[5663]: time="2024-08-28T17:09:36.263882460Z" level=info msg="ignoring event" container=b71ae668bb642e2203037bbf11a9c95dcbf4ef58ace19ab3911257049bd8a2d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:09:36 functional-429000 dockerd[5670]: time="2024-08-28T17:09:36.263976918Z" level=info msg="shim disconnected" id=b71ae668bb642e2203037bbf11a9c95dcbf4ef58ace19ab3911257049bd8a2d3 namespace=moby
	Aug 28 17:09:36 functional-429000 dockerd[5670]: time="2024-08-28T17:09:36.264043042Z" level=warning msg="cleaning up after shim disconnected" id=b71ae668bb642e2203037bbf11a9c95dcbf4ef58ace19ab3911257049bd8a2d3 namespace=moby
	Aug 28 17:09:36 functional-429000 dockerd[5670]: time="2024-08-28T17:09:36.264047959Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	881def8d1d47e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 seconds ago        Exited              mount-munger              0                   b71ae668bb642       busybox-mount
	095a99aafe8dd       nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add                         16 seconds ago       Running             myfrontend                0                   ef715e8ed5250       sp-pod
	01e32ab77b91d       72565bf5bbedf                                                                                         16 seconds ago       Exited              echoserver-arm            2                   150ee60cf9aea       hello-node-connect-65d86f57f4-h4nz2
	a2f0fcb614a40       72565bf5bbedf                                                                                         27 seconds ago       Exited              echoserver-arm            2                   2c26fcad10f0f       hello-node-64b4f8f9ff-www5b
	5e733f4f28580       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                         40 seconds ago       Running             nginx                     0                   bf6eedd2397ff       nginx-svc
	2fc954210e073       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   f0b4eac6c76eb       coredns-6f6b679f8f-n77h6
	705f5703a1c7d       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   7cf624cc18c13       storage-provisioner
	d6121b6f41716       71d55d66fd4ee                                                                                         About a minute ago   Running             kube-proxy                2                   e03f5ba6c17ef       kube-proxy-gxcrt
	f313bfbcf5d76       fcb0683e6bdbd                                                                                         About a minute ago   Running             kube-controller-manager   2                   85cf693f32107       kube-controller-manager-functional-429000
	644430f166f60       fbbbd428abb4d                                                                                         About a minute ago   Running             kube-scheduler            2                   a757606f1ca89       kube-scheduler-functional-429000
	795c2b453262b       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   592f35a9d330e       etcd-functional-429000
	7a7a7bac2c2f0       cd0f0ae0ec9e0                                                                                         About a minute ago   Running             kube-apiserver            0                   a65cb1ca1a558       kube-apiserver-functional-429000
	cf8a00eee2880       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   0ee0d4b869039       coredns-6f6b679f8f-n77h6
	e51cb418737a7       71d55d66fd4ee                                                                                         About a minute ago   Exited              kube-proxy                1                   bfe4ab520b8a5       kube-proxy-gxcrt
	e1e32638b9137       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   ebceef4d128ec       storage-provisioner
	f42e94279c852       fcb0683e6bdbd                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   5b483efc37915       kube-controller-manager-functional-429000
	bd858d02900a9       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   b69e5f19371f2       etcd-functional-429000
	cd130765c2c6d       fbbbd428abb4d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   2c47e4bb72999       kube-scheduler-functional-429000
	
	
	==> coredns [2fc954210e07] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57630 - 9373 "HINFO IN 6414260523601406005.2062475696677032573. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009566231s
	[INFO] 10.244.0.1:2146 - 28656 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000114041s
	[INFO] 10.244.0.1:39308 - 28744 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000084917s
	[INFO] 10.244.0.1:47957 - 60914 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000871748s
	[INFO] 10.244.0.1:32726 - 20651 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000087708s
	[INFO] 10.244.0.1:11024 - 62204 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000063916s
	[INFO] 10.244.0.1:37989 - 31676 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000070833s
	
	
	==> coredns [cf8a00eee288] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54730 - 24876 "HINFO IN 8729357298403756534.3719860101935248259. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01009072s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-429000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-429000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=functional-429000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T10_07_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:07:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-429000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:09:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:09:30 +0000   Wed, 28 Aug 2024 17:07:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:09:30 +0000   Wed, 28 Aug 2024 17:07:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:09:30 +0000   Wed, 28 Aug 2024 17:07:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:09:30 +0000   Wed, 28 Aug 2024 17:07:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-429000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cd18308fd15437ebc622097ff4aaf20
	  System UUID:                3cd18308fd15437ebc622097ff4aaf20
	  Boot ID:                    d303b013-2dc3-4162-8eb8-a5916afc8bc8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-www5b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  default                     hello-node-connect-65d86f57f4-h4nz2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 coredns-6f6b679f8f-n77h6                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m27s
	  kube-system                 etcd-functional-429000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m32s
	  kube-system                 kube-apiserver-functional-429000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-functional-429000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-gxcrt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-functional-429000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m26s                  kube-proxy       
	  Normal  Starting                 69s                    kube-proxy       
	  Normal  Starting                 116s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node functional-429000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node functional-429000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m36s (x7 over 2m36s)  kubelet          Node functional-429000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m32s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m32s                  kubelet          Node functional-429000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m32s                  kubelet          Node functional-429000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m32s                  kubelet          Node functional-429000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m29s                  kubelet          Node functional-429000 status is now: NodeReady
	  Normal  RegisteredNode           2m28s                  node-controller  Node functional-429000 event: Registered Node functional-429000 in Controller
	  Normal  Starting                 2m1s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)    kubelet          Node functional-429000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)    kubelet          Node functional-429000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x7 over 2m1s)    kubelet          Node functional-429000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           114s                   node-controller  Node functional-429000 event: Registered Node functional-429000 in Controller
	  Normal  NodeHasNoDiskPressure    73s (x8 over 73s)      kubelet          Node functional-429000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  73s (x8 over 73s)      kubelet          Node functional-429000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 73s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     73s (x7 over 73s)      kubelet          Node functional-429000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                    node-controller  Node functional-429000 event: Registered Node functional-429000 in Controller
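The node report above is ordinary `kubectl describe node` output and can be regenerated while the cluster is still up; a one-line sketch using the context and node name taken from this log:

	kubectl --context functional-429000 describe node functional-429000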
	
	
	==> dmesg <==
	[ +15.110302] systemd-fstab-generator[4746]: Ignoring "noauto" option for root device
	[  +0.057342] kauditd_printk_skb: 35 callbacks suppressed
	[Aug28 17:08] systemd-fstab-generator[5179]: Ignoring "noauto" option for root device
	[  +0.054039] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.105765] systemd-fstab-generator[5213]: Ignoring "noauto" option for root device
	[  +0.107809] systemd-fstab-generator[5225]: Ignoring "noauto" option for root device
	[  +0.112845] systemd-fstab-generator[5253]: Ignoring "noauto" option for root device
	[  +5.121689] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.334055] systemd-fstab-generator[5871]: Ignoring "noauto" option for root device
	[  +0.086608] systemd-fstab-generator[5883]: Ignoring "noauto" option for root device
	[  +0.084810] systemd-fstab-generator[5895]: Ignoring "noauto" option for root device
	[  +0.100088] systemd-fstab-generator[5910]: Ignoring "noauto" option for root device
	[  +0.228935] systemd-fstab-generator[6077]: Ignoring "noauto" option for root device
	[  +0.935107] systemd-fstab-generator[6197]: Ignoring "noauto" option for root device
	[  +4.413097] kauditd_printk_skb: 199 callbacks suppressed
	[  +9.789672] kauditd_printk_skb: 33 callbacks suppressed
	[  +0.722662] systemd-fstab-generator[7229]: Ignoring "noauto" option for root device
	[  +5.011800] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.241222] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.102638] kauditd_printk_skb: 20 callbacks suppressed
	[Aug28 17:09] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.920357] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.394506] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.122287] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.191592] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [795c2b453262] <==
	{"level":"info","ts":"2024-08-28T17:08:26.988216Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:08:26.988250Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:08:26.988714Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-28T17:08:26.988806Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-28T17:08:26.988815Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-28T17:08:26.988855Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-28T17:08:26.988862Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-28T17:08:26.988017Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-28T17:08:26.989325Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-28T17:08:28.736435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-28T17:08:28.736586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-28T17:08:28.736673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-28T17:08:28.736709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-28T17:08:28.736776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-28T17:08:28.736831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-28T17:08:28.736875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-28T17:08:28.741421Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:08:28.741941Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:08:28.742517Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T17:08:28.742602Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T17:08:28.741434Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-429000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T17:08:28.744223Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:08:28.744223Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:08:28.746169Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-28T17:08:28.746568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> etcd [bd858d02900a] <==
	{"level":"info","ts":"2024-08-28T17:07:41.155908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-28T17:07:41.155979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-28T17:07:41.156161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-28T17:07:41.156303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-28T17:07:41.156444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-28T17:07:41.156670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-28T17:07:41.159092Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:07:41.159150Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:07:41.160338Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T17:07:41.160431Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T17:07:41.159108Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-429000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T17:07:41.162051Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:07:41.162051Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:07:41.164508Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-28T17:07:41.166059Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-28T17:08:12.257168Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-28T17:08:12.257193Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-429000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-28T17:08:12.257229Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-28T17:08:12.257279Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-28T17:08:12.274562Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-28T17:08:12.274596Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-28T17:08:12.274616Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-28T17:08:12.276526Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-28T17:08:12.276560Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-28T17:08:12.276564Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-429000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 17:09:40 up 2 min,  0 users,  load average: 1.11, 1.06, 0.46
	Linux functional-429000 5.10.207 #1 SMP PREEMPT Tue Aug 27 17:57:16 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7a7a7bac2c2f] <==
	I0828 17:08:29.348091       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0828 17:08:29.348116       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0828 17:08:29.348125       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0828 17:08:29.347987       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0828 17:08:29.348111       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0828 17:08:29.347993       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0828 17:08:29.350556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0828 17:08:29.350601       1 aggregator.go:171] initial CRD sync complete...
	I0828 17:08:29.350608       1 autoregister_controller.go:144] Starting autoregister controller
	I0828 17:08:29.350611       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0828 17:08:29.350612       1 cache.go:39] Caches are synced for autoregister controller
	I0828 17:08:29.350800       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0828 17:08:30.250873       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0828 17:08:30.805123       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0828 17:08:30.808870       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0828 17:08:30.819466       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0828 17:08:30.826535       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0828 17:08:30.828436       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0828 17:08:32.926832       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0828 17:08:33.024846       1 controller.go:615] quota admission added evaluator for: endpoints
	I0828 17:08:46.001945       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.160.152"}
	I0828 17:08:50.874676       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0828 17:08:50.918091       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.169.102"}
	I0828 17:08:55.122295       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.88.214"}
	I0828 17:09:05.561464       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.62.67"}
	
	
	==> kube-controller-manager [f313bfbcf5d7] <==
	I0828 17:08:32.978614       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="103.541µs"
	I0828 17:08:33.227514       1 shared_informer.go:320] Caches are synced for garbage collector
	I0828 17:08:33.227682       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0828 17:08:33.233299       1 shared_informer.go:320] Caches are synced for garbage collector
	I0828 17:08:40.266811       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="7.505824ms"
	I0828 17:08:40.267721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="33.458µs"
	I0828 17:08:50.886362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="10.328713ms"
	I0828 17:08:50.891558       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="5.165773ms"
	I0828 17:08:50.891769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="43.375µs"
	I0828 17:08:50.893519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="26.625µs"
	I0828 17:08:56.508340       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="19.916µs"
	I0828 17:08:57.548806       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="38.917µs"
	I0828 17:08:58.552857       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="31.042µs"
	I0828 17:08:59.692304       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-429000"
	I0828 17:09:05.526136       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="6.331022ms"
	I0828 17:09:05.530765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="4.471569ms"
	I0828 17:09:05.530913       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="29.625µs"
	I0828 17:09:06.695375       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="43.25µs"
	I0828 17:09:07.690515       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="26.583µs"
	I0828 17:09:12.747303       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="31.625µs"
	I0828 17:09:23.181664       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="44.667µs"
	I0828 17:09:23.978579       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="43.083µs"
	I0828 17:09:26.197706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="76.333µs"
	I0828 17:09:30.249076       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-429000"
	I0828 17:09:35.183132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="42.208µs"
	
	
	==> kube-controller-manager [f42e94279c85] <==
	I0828 17:07:45.058928       1 shared_informer.go:320] Caches are synced for taint
	I0828 17:07:45.059048       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0828 17:07:45.059123       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-429000"
	I0828 17:07:45.059253       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0828 17:07:45.111594       1 shared_informer.go:320] Caches are synced for GC
	I0828 17:07:45.111809       1 shared_informer.go:320] Caches are synced for TTL
	I0828 17:07:45.111850       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0828 17:07:45.111859       1 shared_informer.go:320] Caches are synced for daemon sets
	I0828 17:07:45.111865       1 shared_informer.go:320] Caches are synced for node
	I0828 17:07:45.113521       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0828 17:07:45.113561       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0828 17:07:45.113568       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0828 17:07:45.113819       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0828 17:07:45.113906       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-429000"
	I0828 17:07:45.155111       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0828 17:07:45.204390       1 shared_informer.go:320] Caches are synced for attach detach
	I0828 17:07:45.204404       1 shared_informer.go:320] Caches are synced for PV protection
	I0828 17:07:45.210072       1 shared_informer.go:320] Caches are synced for persistent volume
	I0828 17:07:45.227874       1 shared_informer.go:320] Caches are synced for resource quota
	I0828 17:07:45.258116       1 shared_informer.go:320] Caches are synced for resource quota
	I0828 17:07:45.663441       1 shared_informer.go:320] Caches are synced for garbage collector
	I0828 17:07:45.663509       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0828 17:07:45.667657       1 shared_informer.go:320] Caches are synced for garbage collector
	I0828 17:07:46.481132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="31.995581ms"
	I0828 17:07:46.481377       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="48.417µs"
	
	
	==> kube-proxy [d6121b6f4171] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 17:08:30.744235       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 17:08:30.747534       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0828 17:08:30.747560       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 17:08:30.756097       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 17:08:30.756113       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 17:08:30.756126       1 server_linux.go:169] "Using iptables Proxier"
	I0828 17:08:30.759743       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 17:08:30.759889       1 server.go:483] "Version info" version="v1.31.0"
	I0828 17:08:30.759959       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:08:30.760496       1 config.go:197] "Starting service config controller"
	I0828 17:08:30.760527       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 17:08:30.760553       1 config.go:104] "Starting endpoint slice config controller"
	I0828 17:08:30.760568       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 17:08:30.761115       1 config.go:326] "Starting node config controller"
	I0828 17:08:30.764299       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 17:08:30.861983       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 17:08:30.861983       1 shared_informer.go:320] Caches are synced for service config
	I0828 17:08:30.864449       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e51cb418737a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 17:07:43.220507       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 17:07:43.231661       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0828 17:07:43.231689       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 17:07:43.243185       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 17:07:43.243204       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 17:07:43.243225       1 server_linux.go:169] "Using iptables Proxier"
	I0828 17:07:43.244117       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 17:07:43.244276       1 server.go:483] "Version info" version="v1.31.0"
	I0828 17:07:43.244282       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:07:43.244860       1 config.go:197] "Starting service config controller"
	I0828 17:07:43.244904       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 17:07:43.244915       1 config.go:104] "Starting endpoint slice config controller"
	I0828 17:07:43.244917       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 17:07:43.245129       1 config.go:326] "Starting node config controller"
	I0828 17:07:43.245132       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 17:07:43.345289       1 shared_informer.go:320] Caches are synced for service config
	I0828 17:07:43.345289       1 shared_informer.go:320] Caches are synced for node config
	I0828 17:07:43.345314       1 shared_informer.go:320] Caches are synced for endpoint slice config
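Both kube-proxy restarts above log the same configuration warning: nodePortAddresses is unset, so NodePort connections are accepted on all local IPs. A minimal sketch of the remedy the message itself suggests; where the flag is set in a real deployment (static pod manifest, DaemonSet args, config file) varies and is not shown in this log:

	# The warning's suggested remedy, applied to the kube-proxy command line
	# (illustrative fragment only; remaining kube-proxy arguments omitted)
	kube-proxy --nodeport-addresses primary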
	
	
	==> kube-scheduler [644430f166f6] <==
	I0828 17:08:27.309124       1 serving.go:386] Generated self-signed cert in-memory
	W0828 17:08:29.272306       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 17:08:29.272353       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 17:08:29.272363       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 17:08:29.272371       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 17:08:29.325405       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0828 17:08:29.325720       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:08:29.327029       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0828 17:08:29.327075       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 17:08:29.327822       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 17:08:29.327085       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0828 17:08:29.429175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cd130765c2c6] <==
	I0828 17:07:39.801670       1 serving.go:386] Generated self-signed cert in-memory
	W0828 17:07:41.691384       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 17:07:41.691418       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 17:07:41.691425       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 17:07:41.691429       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 17:07:41.720902       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0828 17:07:41.720927       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:07:41.721942       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0828 17:07:41.722142       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 17:07:41.722153       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 17:07:41.722188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0828 17:07:41.823956       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0828 17:08:12.247355       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 28 17:09:22 functional-429000 kubelet[6204]: I0828 17:09:22.178482    6204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e5084b-0ba5-47ec-acf1-a28c3f093897" path="/var/lib/kubelet/pods/50e5084b-0ba5-47ec-acf1-a28c3f093897/volumes"
	Aug 28 17:09:23 functional-429000 kubelet[6204]: I0828 17:09:23.169607    6204 scope.go:117] "RemoveContainer" containerID="d82295bcc22d005da5094a90834b593defc6e5fe66a3a8e068fbccd3ab7b1596"
	Aug 28 17:09:23 functional-429000 kubelet[6204]: I0828 17:09:23.967046    6204 scope.go:117] "RemoveContainer" containerID="d82295bcc22d005da5094a90834b593defc6e5fe66a3a8e068fbccd3ab7b1596"
	Aug 28 17:09:23 functional-429000 kubelet[6204]: I0828 17:09:23.967574    6204 scope.go:117] "RemoveContainer" containerID="01e32ab77b91dbc777695120e97935216a6ccf5c5690e2d20f0d5103fe94eec9"
	Aug 28 17:09:23 functional-429000 kubelet[6204]: E0828 17:09:23.967846    6204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-h4nz2_default(66d5b42c-f19b-4b1e-8554-9f400ec16142)\"" pod="default/hello-node-connect-65d86f57f4-h4nz2" podUID="66d5b42c-f19b-4b1e-8554-9f400ec16142"
	Aug 28 17:09:26 functional-429000 kubelet[6204]: I0828 17:09:26.172434    6204 scope.go:117] "RemoveContainer" containerID="a2f0fcb614a40bd47e201e87d9e515ac4aab74830e32f1930c192a233d1f9f9a"
	Aug 28 17:09:26 functional-429000 kubelet[6204]: E0828 17:09:26.172727    6204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-www5b_default(50d55603-c843-4cb6-a1de-603012f70725)\"" pod="default/hello-node-64b4f8f9ff-www5b" podUID="50d55603-c843-4cb6-a1de-603012f70725"
	Aug 28 17:09:26 functional-429000 kubelet[6204]: E0828 17:09:26.189556    6204 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 17:09:26 functional-429000 kubelet[6204]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 17:09:26 functional-429000 kubelet[6204]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 17:09:26 functional-429000 kubelet[6204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 17:09:26 functional-429000 kubelet[6204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 17:09:26 functional-429000 kubelet[6204]: I0828 17:09:26.197309    6204 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=4.415524162 podStartE2EDuration="5.197291556s" podCreationTimestamp="2024-08-28 17:09:21 +0000 UTC" firstStartedPulling="2024-08-28 17:09:22.387426961 +0000 UTC m=+56.272550862" lastFinishedPulling="2024-08-28 17:09:23.169194355 +0000 UTC m=+57.054318256" observedRunningTime="2024-08-28 17:09:23.990536891 +0000 UTC m=+57.875660792" watchObservedRunningTime="2024-08-28 17:09:26.197291556 +0000 UTC m=+60.082415499"
	Aug 28 17:09:26 functional-429000 kubelet[6204]: I0828 17:09:26.270721    6204 scope.go:117] "RemoveContainer" containerID="720fb6aa7a92c88517d6ec229ade70ad8ad1fb1f49b6117f65474b1519553082"
	Aug 28 17:09:30 functional-429000 kubelet[6204]: I0828 17:09:30.820598    6204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdp5b\" (UniqueName: \"kubernetes.io/projected/4fa8f109-116d-447f-b5e3-d7d25c9f0103-kube-api-access-zdp5b\") pod \"busybox-mount\" (UID: \"4fa8f109-116d-447f-b5e3-d7d25c9f0103\") " pod="default/busybox-mount"
	Aug 28 17:09:30 functional-429000 kubelet[6204]: I0828 17:09:30.820622    6204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4fa8f109-116d-447f-b5e3-d7d25c9f0103-test-volume\") pod \"busybox-mount\" (UID: \"4fa8f109-116d-447f-b5e3-d7d25c9f0103\") " pod="default/busybox-mount"
	Aug 28 17:09:35 functional-429000 kubelet[6204]: I0828 17:09:35.170369    6204 scope.go:117] "RemoveContainer" containerID="01e32ab77b91dbc777695120e97935216a6ccf5c5690e2d20f0d5103fe94eec9"
	Aug 28 17:09:35 functional-429000 kubelet[6204]: E0828 17:09:35.170696    6204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-h4nz2_default(66d5b42c-f19b-4b1e-8554-9f400ec16142)\"" pod="default/hello-node-connect-65d86f57f4-h4nz2" podUID="66d5b42c-f19b-4b1e-8554-9f400ec16142"
	Aug 28 17:09:36 functional-429000 kubelet[6204]: I0828 17:09:36.471594    6204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdp5b\" (UniqueName: \"kubernetes.io/projected/4fa8f109-116d-447f-b5e3-d7d25c9f0103-kube-api-access-zdp5b\") pod \"4fa8f109-116d-447f-b5e3-d7d25c9f0103\" (UID: \"4fa8f109-116d-447f-b5e3-d7d25c9f0103\") "
	Aug 28 17:09:36 functional-429000 kubelet[6204]: I0828 17:09:36.471784    6204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4fa8f109-116d-447f-b5e3-d7d25c9f0103-test-volume\") pod \"4fa8f109-116d-447f-b5e3-d7d25c9f0103\" (UID: \"4fa8f109-116d-447f-b5e3-d7d25c9f0103\") "
	Aug 28 17:09:36 functional-429000 kubelet[6204]: I0828 17:09:36.471835    6204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fa8f109-116d-447f-b5e3-d7d25c9f0103-test-volume" (OuterVolumeSpecName: "test-volume") pod "4fa8f109-116d-447f-b5e3-d7d25c9f0103" (UID: "4fa8f109-116d-447f-b5e3-d7d25c9f0103"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 28 17:09:36 functional-429000 kubelet[6204]: I0828 17:09:36.472518    6204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa8f109-116d-447f-b5e3-d7d25c9f0103-kube-api-access-zdp5b" (OuterVolumeSpecName: "kube-api-access-zdp5b") pod "4fa8f109-116d-447f-b5e3-d7d25c9f0103" (UID: "4fa8f109-116d-447f-b5e3-d7d25c9f0103"). InnerVolumeSpecName "kube-api-access-zdp5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:09:36 functional-429000 kubelet[6204]: I0828 17:09:36.572871    6204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zdp5b\" (UniqueName: \"kubernetes.io/projected/4fa8f109-116d-447f-b5e3-d7d25c9f0103-kube-api-access-zdp5b\") on node \"functional-429000\" DevicePath \"\""
	Aug 28 17:09:36 functional-429000 kubelet[6204]: I0828 17:09:36.572898    6204 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4fa8f109-116d-447f-b5e3-d7d25c9f0103-test-volume\") on node \"functional-429000\" DevicePath \"\""
	Aug 28 17:09:37 functional-429000 kubelet[6204]: I0828 17:09:37.187873    6204 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b71ae668bb642e2203037bbf11a9c95dcbf4ef58ace19ab3911257049bd8a2d3"
	
	
	==> storage-provisioner [705f5703a1c7] <==
	I0828 17:08:30.716895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 17:08:30.731673       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 17:08:30.731691       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 17:08:48.145599       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 17:08:48.146129       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-429000_d0887b9b-4a4f-4fa3-922b-1634955e895c!
	I0828 17:08:48.147182       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a476f30-41f4-4034-91c1-fb5ae084f9d4", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-429000_d0887b9b-4a4f-4fa3-922b-1634955e895c became leader
	I0828 17:08:48.246923       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-429000_d0887b9b-4a4f-4fa3-922b-1634955e895c!
	I0828 17:09:07.573047       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0828 17:09:07.573079       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    bda73add-9b03-40fc-9463-7b733d0efd3b 341 0 2024-08-28 17:07:12 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-28 17:07:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-3ede3b1b-fb8b-478f-883b-480534b059da &PersistentVolumeClaim{ObjectMeta:{myclaim  default  3ede3b1b-fb8b-478f-883b-480534b059da 750 0 2024-08-28 17:09:07 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-28 17:09:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-28 17:09:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0828 17:09:07.573652       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-3ede3b1b-fb8b-478f-883b-480534b059da" provisioned
	I0828 17:09:07.573687       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0828 17:09:07.573716       1 volume_store.go:212] Trying to save persistentvolume "pvc-3ede3b1b-fb8b-478f-883b-480534b059da"
	I0828 17:09:07.574244       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3ede3b1b-fb8b-478f-883b-480534b059da", APIVersion:"v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0828 17:09:07.578839       1 volume_store.go:219] persistentvolume "pvc-3ede3b1b-fb8b-478f-883b-480534b059da" saved
	I0828 17:09:07.579041       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3ede3b1b-fb8b-478f-883b-480534b059da", APIVersion:"v1", ResourceVersion:"750", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-3ede3b1b-fb8b-478f-883b-480534b059da
	
	
	==> storage-provisioner [e1e32638b913] <==
	I0828 17:07:43.141213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 17:07:43.146155       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 17:07:43.146179       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 17:07:43.154757       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 17:07:43.154852       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-429000_f9b794ad-619a-4c9c-9258-a5b261d0abde!
	I0828 17:07:43.155312       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a476f30-41f4-4034-91c1-fb5ae084f9d4", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-429000_f9b794ad-619a-4c9c-9258-a5b261d0abde became leader
	I0828 17:07:43.255369       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-429000_f9b794ad-619a-4c9c-9258-a5b261d0abde!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-429000 -n functional-429000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-429000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-429000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-429000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-429000/192.168.105.4
	Start Time:       Wed, 28 Aug 2024 10:09:30 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://881def8d1d47e8a7be4429a626faf2227f24bdadfb8fff08c0061487051fa727
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 28 Aug 2024 10:09:34 -0700
	      Finished:     Wed, 28 Aug 2024 10:09:34 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zdp5b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zdp5b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  9s    default-scheduler  Successfully assigned default/busybox-mount to functional-429000
	  Normal  Pulling    9s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.263s (3.263s including waiting). Image size: 3547125 bytes.
	  Normal  Created    6s    kubelet            Created container mount-munger
	  Normal  Started    6s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (35.15s)
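
For reference, the non-running-pod sweep that produced the post-mortem above reduces to a single field-selector query (helpers_test.go:261). A standalone Go sketch of the same query, assuming kubectl is on PATH and using the context name from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// List pods in every namespace whose phase is not Running.
		out, err := exec.Command("kubectl",
			"--context", "functional-429000",
			"get", "po", "-A",
			"--field-selector", "status.phase!=Running",
			"-o", "jsonpath={.items[*].metadata.name}",
		).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		// A completed helper pod (phase "Succeeded", e.g. busybox-mount)
		// shows up here too; that alone is not a failure.
		fmt.Printf("non-running pods: %s\n", out)
	}
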

TestMultiControlPlane/serial/StopSecondaryNode (214.15s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 node stop m02 -v=7 --alsologtostderr
E0828 10:14:31.894528    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-092000 node stop m02 -v=7 --alsologtostderr: (12.197192083s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr
E0828 10:14:38.061788    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:15:12.855157    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:16:34.775276    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr: exit status 7 (2m55.985075166s)

-- stdout --
	ha-092000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-092000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-092000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-092000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0828 10:14:34.754260    3186 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:14:34.754433    3186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:14:34.754436    3186 out.go:358] Setting ErrFile to fd 2...
	I0828 10:14:34.754439    3186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:14:34.754572    3186 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:14:34.754728    3186 out.go:352] Setting JSON to false
	I0828 10:14:34.754740    3186 mustload.go:65] Loading cluster: ha-092000
	I0828 10:14:34.754776    3186 notify.go:220] Checking for updates...
	I0828 10:14:34.755005    3186 config.go:182] Loaded profile config "ha-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:14:34.755015    3186 status.go:255] checking status of ha-092000 ...
	I0828 10:14:34.755786    3186 status.go:330] ha-092000 host status = "Running" (err=<nil>)
	I0828 10:14:34.755794    3186 host.go:66] Checking if "ha-092000" exists ...
	I0828 10:14:34.755913    3186 host.go:66] Checking if "ha-092000" exists ...
	I0828 10:14:34.756041    3186 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 10:14:34.756050    3186 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/id_rsa Username:docker}
	W0828 10:15:00.678776    3186 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0828 10:15:00.678918    3186 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0828 10:15:00.678939    3186 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0828 10:15:00.678948    3186 status.go:257] ha-092000 status: &{Name:ha-092000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0828 10:15:00.678969    3186 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0828 10:15:00.678980    3186 status.go:255] checking status of ha-092000-m02 ...
	I0828 10:15:00.679427    3186 status.go:330] ha-092000-m02 host status = "Stopped" (err=<nil>)
	I0828 10:15:00.679438    3186 status.go:343] host is not running, skipping remaining checks
	I0828 10:15:00.679443    3186 status.go:257] ha-092000-m02 status: &{Name:ha-092000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:15:00.679455    3186 status.go:255] checking status of ha-092000-m03 ...
	I0828 10:15:00.680668    3186 status.go:330] ha-092000-m03 host status = "Running" (err=<nil>)
	I0828 10:15:00.680679    3186 host.go:66] Checking if "ha-092000-m03" exists ...
	I0828 10:15:00.680893    3186 host.go:66] Checking if "ha-092000-m03" exists ...
	I0828 10:15:00.681147    3186 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 10:15:00.681161    3186 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m03/id_rsa Username:docker}
	W0828 10:16:15.681915    3186 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0828 10:16:15.681955    3186 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0828 10:16:15.681979    3186 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0828 10:16:15.681984    3186 status.go:257] ha-092000-m03 status: &{Name:ha-092000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0828 10:16:15.681993    3186 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0828 10:16:15.681997    3186 status.go:255] checking status of ha-092000-m04 ...
	I0828 10:16:15.682663    3186 status.go:330] ha-092000-m04 host status = "Running" (err=<nil>)
	I0828 10:16:15.682672    3186 host.go:66] Checking if "ha-092000-m04" exists ...
	I0828 10:16:15.682769    3186 host.go:66] Checking if "ha-092000-m04" exists ...
	I0828 10:16:15.682888    3186 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 10:16:15.682896    3186 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m04/id_rsa Username:docker}
	W0828 10:17:30.683276    3186 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0828 10:17:30.683323    3186 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0828 10:17:30.683331    3186 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0828 10:17:30.683334    3186 status.go:257] ha-092000-m04 status: &{Name:ha-092000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0828 10:17:30.683343    3186 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr": ha-092000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-092000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-092000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-092000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr": ha-092000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-092000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-092000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-092000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr": ha-092000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-092000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-092000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-092000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000: exit status 3 (25.96485425s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0828 10:17:56.647911    3213 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0828 10:17:56.647919    3213 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-092000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.15s)
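
A note on the exit codes seen above: `minikube status` exits non-zero when any checked component is not running (this run saw exit status 7 for the full multi-node status and exit status 3 for the single-host post-mortem query), and `--format={{.Host}}` renders only the Host field of the status struct through a Go template. A minimal sketch of the same check, assuming a `minikube` binary on PATH rather than the workspace-relative out/minikube-darwin-arm64:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// -p selects the profile, -n the node within it.
		cmd := exec.Command("minikube", "status",
			"--format", "{{.Host}}", "-p", "ha-092000", "-n", "ha-092000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("host state: %s\n", out)
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Non-zero signals that some component is not running.
			fmt.Println("status exit code:", exitErr.ExitCode())
		}
	}
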

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0828 10:18:50.892218    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:19:10.328803    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.843436959s)
ha_test.go:413: expected profile "ha-092000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-092000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-092000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-092000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000
E0828 10:19:18.615335    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000: exit status 3 (25.963065208s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0828 10:19:40.449092    3243 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0828 10:19:40.449139    3243 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-092000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.81s)
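
The assertion at ha_test.go:413 parses `minikube profile list --output json` and expects the profile's Status to be "Degraded" once a single control-plane node is down; this run reported "Stopped" instead. A sketch of that parse, modelling only the two fields used here (the full Config object is far larger, as the expanded JSON in the failure message shows), again assuming a `minikube` binary on PATH:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Field names match the "valid" array in the JSON above.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("minikube",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, p := range pl.Valid {
			// Expected "Degraded" here; the failing run saw "Stopped".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}
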

TestMultiControlPlane/serial/RestartSecondaryNode (208.41s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-092000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.131264875s)

-- stdout --
	* Starting "ha-092000-m02" control-plane node in "ha-092000" cluster
	* Restarting existing qemu2 VM for "ha-092000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-092000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:19:40.519169    3253 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:19:40.519448    3253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:19:40.519452    3253 out.go:358] Setting ErrFile to fd 2...
	I0828 10:19:40.519455    3253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:19:40.519604    3253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:19:40.519910    3253 mustload.go:65] Loading cluster: ha-092000
	I0828 10:19:40.520186    3253 config.go:182] Loaded profile config "ha-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0828 10:19:40.520465    3253 host.go:58] "ha-092000-m02" host status: Stopped
	I0828 10:19:40.524968    3253 out.go:177] * Starting "ha-092000-m02" control-plane node in "ha-092000" cluster
	I0828 10:19:40.528957    3253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:19:40.528978    3253 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:19:40.528987    3253 cache.go:56] Caching tarball of preloaded images
	I0828 10:19:40.529101    3253 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:19:40.529108    3253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:19:40.529180    3253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/ha-092000/config.json ...
	I0828 10:19:40.530271    3253 start.go:360] acquireMachinesLock for ha-092000-m02: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:19:40.530324    3253 start.go:364] duration metric: took 37.416µs to acquireMachinesLock for "ha-092000-m02"
	I0828 10:19:40.530337    3253 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:19:40.530343    3253 fix.go:54] fixHost starting: m02
	I0828 10:19:40.530513    3253 fix.go:112] recreateIfNeeded on ha-092000-m02: state=Stopped err=<nil>
	W0828 10:19:40.530520    3253 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:19:40.534955    3253 out.go:177] * Restarting existing qemu2 VM for "ha-092000-m02" ...
	I0828 10:19:40.537922    3253 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:19:40.537975    3253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:76:10:5d:ec:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/disk.qcow2
	I0828 10:19:40.541461    3253 main.go:141] libmachine: STDOUT: 
	I0828 10:19:40.541489    3253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:19:40.541521    3253 fix.go:56] duration metric: took 11.177541ms for fixHost
	I0828 10:19:40.541525    3253 start.go:83] releasing machines lock for "ha-092000-m02", held for 11.195583ms
	W0828 10:19:40.541536    3253 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:19:40.541579    3253 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:19:40.541584    3253 start.go:729] Will try again in 5 seconds ...
	I0828 10:19:45.543761    3253 start.go:360] acquireMachinesLock for ha-092000-m02: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:19:45.544196    3253 start.go:364] duration metric: took 340.75µs to acquireMachinesLock for "ha-092000-m02"
	I0828 10:19:45.544326    3253 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:19:45.544338    3253 fix.go:54] fixHost starting: m02
	I0828 10:19:45.544840    3253 fix.go:112] recreateIfNeeded on ha-092000-m02: state=Stopped err=<nil>
	W0828 10:19:45.544856    3253 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:19:45.548574    3253 out.go:177] * Restarting existing qemu2 VM for "ha-092000-m02" ...
	I0828 10:19:45.552568    3253 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:19:45.552709    3253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:76:10:5d:ec:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/disk.qcow2
	I0828 10:19:45.559786    3253 main.go:141] libmachine: STDOUT: 
	I0828 10:19:45.559834    3253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:19:45.559908    3253 fix.go:56] duration metric: took 15.569333ms for fixHost
	I0828 10:19:45.559921    3253 start.go:83] releasing machines lock for "ha-092000-m02", held for 15.708584ms
	W0828 10:19:45.560083    3253 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:19:45.564490    3253 out.go:201] 
	W0828 10:19:45.568648    3253 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:19:45.568668    3253 out.go:270] * 
	* 
	W0828 10:19:45.574782    3253 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:19:45.579587    3253 out.go:201] 

** /stderr **
ha_test.go:422: I0828 10:19:40.519169    3253 out.go:345] Setting OutFile to fd 1 ...
I0828 10:19:40.519448    3253 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:19:40.519452    3253 out.go:358] Setting ErrFile to fd 2...
I0828 10:19:40.519455    3253 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:19:40.519604    3253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
I0828 10:19:40.519910    3253 mustload.go:65] Loading cluster: ha-092000
I0828 10:19:40.520186    3253 config.go:182] Loaded profile config "ha-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
W0828 10:19:40.520465    3253 host.go:58] "ha-092000-m02" host status: Stopped
I0828 10:19:40.524968    3253 out.go:177] * Starting "ha-092000-m02" control-plane node in "ha-092000" cluster
I0828 10:19:40.528957    3253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0828 10:19:40.528978    3253 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0828 10:19:40.528987    3253 cache.go:56] Caching tarball of preloaded images
I0828 10:19:40.529101    3253 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0828 10:19:40.529108    3253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0828 10:19:40.529180    3253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/ha-092000/config.json ...
I0828 10:19:40.530271    3253 start.go:360] acquireMachinesLock for ha-092000-m02: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0828 10:19:40.530324    3253 start.go:364] duration metric: took 37.416µs to acquireMachinesLock for "ha-092000-m02"
I0828 10:19:40.530337    3253 start.go:96] Skipping create...Using existing machine configuration
I0828 10:19:40.530343    3253 fix.go:54] fixHost starting: m02
I0828 10:19:40.530513    3253 fix.go:112] recreateIfNeeded on ha-092000-m02: state=Stopped err=<nil>
W0828 10:19:40.530520    3253 fix.go:138] unexpected machine state, will restart: <nil>
I0828 10:19:40.534955    3253 out.go:177] * Restarting existing qemu2 VM for "ha-092000-m02" ...
I0828 10:19:40.537922    3253 qemu.go:418] Using hvf for hardware acceleration
I0828 10:19:40.537975    3253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:76:10:5d:ec:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/disk.qcow2
I0828 10:19:40.541461    3253 main.go:141] libmachine: STDOUT: 
I0828 10:19:40.541489    3253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0828 10:19:40.541521    3253 fix.go:56] duration metric: took 11.177541ms for fixHost
I0828 10:19:40.541525    3253 start.go:83] releasing machines lock for "ha-092000-m02", held for 11.195583ms
W0828 10:19:40.541536    3253 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0828 10:19:40.541579    3253 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0828 10:19:40.541584    3253 start.go:729] Will try again in 5 seconds ...
I0828 10:19:45.543761    3253 start.go:360] acquireMachinesLock for ha-092000-m02: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0828 10:19:45.544196    3253 start.go:364] duration metric: took 340.75µs to acquireMachinesLock for "ha-092000-m02"
I0828 10:19:45.544326    3253 start.go:96] Skipping create...Using existing machine configuration
I0828 10:19:45.544338    3253 fix.go:54] fixHost starting: m02
I0828 10:19:45.544840    3253 fix.go:112] recreateIfNeeded on ha-092000-m02: state=Stopped err=<nil>
W0828 10:19:45.544856    3253 fix.go:138] unexpected machine state, will restart: <nil>
I0828 10:19:45.548574    3253 out.go:177] * Restarting existing qemu2 VM for "ha-092000-m02" ...
I0828 10:19:45.552568    3253 qemu.go:418] Using hvf for hardware acceleration
I0828 10:19:45.552709    3253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:76:10:5d:ec:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m02/disk.qcow2
I0828 10:19:45.559786    3253 main.go:141] libmachine: STDOUT: 
I0828 10:19:45.559834    3253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0828 10:19:45.559908    3253 fix.go:56] duration metric: took 15.569333ms for fixHost
I0828 10:19:45.559921    3253 start.go:83] releasing machines lock for "ha-092000-m02", held for 15.708584ms
W0828 10:19:45.560083    3253 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0828 10:19:45.564490    3253 out.go:201] 
W0828 10:19:45.568648    3253 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0828 10:19:45.568668    3253 out.go:270] * 
* 
W0828 10:19:45.574782    3253 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0828 10:19:45.579587    3253 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-092000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr: exit status 7 (2m57.31702875s)

-- stdout --
	ha-092000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-092000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-092000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-092000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0828 10:19:45.641910    3262 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:19:45.642120    3262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:19:45.642125    3262 out.go:358] Setting ErrFile to fd 2...
	I0828 10:19:45.642128    3262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:19:45.642297    3262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:19:45.642461    3262 out.go:352] Setting JSON to false
	I0828 10:19:45.642475    3262 mustload.go:65] Loading cluster: ha-092000
	I0828 10:19:45.642513    3262 notify.go:220] Checking for updates...
	I0828 10:19:45.642734    3262 config.go:182] Loaded profile config "ha-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:19:45.642742    3262 status.go:255] checking status of ha-092000 ...
	I0828 10:19:45.643605    3262 status.go:330] ha-092000 host status = "Running" (err=<nil>)
	I0828 10:19:45.643613    3262 host.go:66] Checking if "ha-092000" exists ...
	I0828 10:19:45.643739    3262 host.go:66] Checking if "ha-092000" exists ...
	I0828 10:19:45.643874    3262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 10:19:45.643883    3262 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/id_rsa Username:docker}
	W0828 10:19:45.644093    3262 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0828 10:19:45.644114    3262 retry.go:31] will retry after 367.10561ms: dial tcp 192.168.105.5:22: connect: host is down
	W0828 10:19:46.013403    3262 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0828 10:19:46.013425    3262 retry.go:31] will retry after 480.05291ms: dial tcp 192.168.105.5:22: connect: host is down
	W0828 10:19:46.494477    3262 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0828 10:19:46.494527    3262 retry.go:31] will retry after 474.594241ms: dial tcp 192.168.105.5:22: connect: host is down
	W0828 10:20:12.890767    3262 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0828 10:20:12.890854    3262 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0828 10:20:12.890869    3262 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0828 10:20:12.890873    3262 status.go:257] ha-092000 status: &{Name:ha-092000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0828 10:20:12.890891    3262 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0828 10:20:12.890895    3262 status.go:255] checking status of ha-092000-m02 ...
	I0828 10:20:12.891122    3262 status.go:330] ha-092000-m02 host status = "Stopped" (err=<nil>)
	I0828 10:20:12.891128    3262 status.go:343] host is not running, skipping remaining checks
	I0828 10:20:12.891130    3262 status.go:257] ha-092000-m02 status: &{Name:ha-092000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:20:12.891134    3262 status.go:255] checking status of ha-092000-m03 ...
	I0828 10:20:12.891919    3262 status.go:330] ha-092000-m03 host status = "Running" (err=<nil>)
	I0828 10:20:12.891929    3262 host.go:66] Checking if "ha-092000-m03" exists ...
	I0828 10:20:12.892044    3262 host.go:66] Checking if "ha-092000-m03" exists ...
	I0828 10:20:12.892167    3262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 10:20:12.892175    3262 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m03/id_rsa Username:docker}
	W0828 10:21:27.893291    3262 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0828 10:21:27.893499    3262 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0828 10:21:27.893588    3262 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0828 10:21:27.893625    3262 status.go:257] ha-092000-m03 status: &{Name:ha-092000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0828 10:21:27.893672    3262 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0828 10:21:27.893693    3262 status.go:255] checking status of ha-092000-m04 ...
	I0828 10:21:27.896648    3262 status.go:330] ha-092000-m04 host status = "Running" (err=<nil>)
	I0828 10:21:27.896672    3262 host.go:66] Checking if "ha-092000-m04" exists ...
	I0828 10:21:27.897138    3262 host.go:66] Checking if "ha-092000-m04" exists ...
	I0828 10:21:27.897683    3262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 10:21:27.897709    3262 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000-m04/id_rsa Username:docker}
	W0828 10:22:42.898366    3262 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0828 10:22:42.898415    3262 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0828 10:22:42.898424    3262 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0828 10:22:42.898428    3262 status.go:257] ha-092000-m04 status: &{Name:ha-092000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0828 10:22:42.898437    3262 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000: exit status 3 (25.961521s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0828 10:23:08.859678    3282 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0828 10:23:08.859688    3282 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-092000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.41s)
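
Every VM restart in this section fails the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon is not listening on the unix socket that the qemu2 driver's networking depends on (SocketVMnetPath in the profile config). A quick liveness probe for that socket, as a sketch using the path from this run:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket socket_vmnet_client connects to; a refused
		// connection reproduces the driver error seen in the logs above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
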

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.38s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-092000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-092000 -v=7 --alsologtostderr
E0828 10:25:33.411767    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-092000 -v=7 --alsologtostderr: (3m49.004768416s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-092000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-092000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.234965s)

-- stdout --
	* [ha-092000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-092000" primary control-plane node in "ha-092000" cluster
	* Restarting existing qemu2 VM for "ha-092000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-092000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:28:17.234497    3383 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:28:17.234691    3383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:28:17.234695    3383 out.go:358] Setting ErrFile to fd 2...
	I0828 10:28:17.234699    3383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:28:17.234870    3383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:28:17.236112    3383 out.go:352] Setting JSON to false
	I0828 10:28:17.256489    3383 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3460,"bootTime":1724862637,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:28:17.256580    3383 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:28:17.262216    3383 out.go:177] * [ha-092000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:28:17.269316    3383 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:28:17.269372    3383 notify.go:220] Checking for updates...
	I0828 10:28:17.277374    3383 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:28:17.281198    3383 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:28:17.284235    3383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:28:17.290150    3383 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:28:17.294240    3383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:28:17.297522    3383 config.go:182] Loaded profile config "ha-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:28:17.297575    3383 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:28:17.302207    3383 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:28:17.310047    3383 start.go:297] selected driver: qemu2
	I0828 10:28:17.310054    3383 start.go:901] validating driver "qemu2" against &{Name:ha-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-092000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:28:17.310133    3383 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:28:17.313171    3383 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:28:17.313214    3383 cni.go:84] Creating CNI manager for ""
	I0828 10:28:17.313220    3383 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0828 10:28:17.313272    3383 start.go:340] cluster config:
	{Name:ha-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-092000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:28:17.317962    3383 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:28:17.326195    3383 out.go:177] * Starting "ha-092000" primary control-plane node in "ha-092000" cluster
	I0828 10:28:17.330123    3383 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:28:17.330145    3383 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:28:17.330157    3383 cache.go:56] Caching tarball of preloaded images
	I0828 10:28:17.330230    3383 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:28:17.330236    3383 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:28:17.330325    3383 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/ha-092000/config.json ...
	I0828 10:28:17.330769    3383 start.go:360] acquireMachinesLock for ha-092000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:28:17.330804    3383 start.go:364] duration metric: took 28.708µs to acquireMachinesLock for "ha-092000"
	I0828 10:28:17.330815    3383 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:28:17.330820    3383 fix.go:54] fixHost starting: 
	I0828 10:28:17.330945    3383 fix.go:112] recreateIfNeeded on ha-092000: state=Stopped err=<nil>
	W0828 10:28:17.330954    3383 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:28:17.334213    3383 out.go:177] * Restarting existing qemu2 VM for "ha-092000" ...
	I0828 10:28:17.342243    3383 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:28:17.342285    3383 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:bc:4b:9b:81:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/disk.qcow2
	I0828 10:28:17.344227    3383 main.go:141] libmachine: STDOUT: 
	I0828 10:28:17.344249    3383 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:28:17.344276    3383 fix.go:56] duration metric: took 13.456792ms for fixHost
	I0828 10:28:17.344280    3383 start.go:83] releasing machines lock for "ha-092000", held for 13.471875ms
	W0828 10:28:17.344286    3383 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:28:17.344317    3383 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:28:17.344322    3383 start.go:729] Will try again in 5 seconds ...
	I0828 10:28:22.346431    3383 start.go:360] acquireMachinesLock for ha-092000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:28:22.346834    3383 start.go:364] duration metric: took 317.583µs to acquireMachinesLock for "ha-092000"
	I0828 10:28:22.346951    3383 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:28:22.346970    3383 fix.go:54] fixHost starting: 
	I0828 10:28:22.347630    3383 fix.go:112] recreateIfNeeded on ha-092000: state=Stopped err=<nil>
	W0828 10:28:22.347655    3383 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:28:22.351194    3383 out.go:177] * Restarting existing qemu2 VM for "ha-092000" ...
	I0828 10:28:22.358968    3383 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:28:22.359208    3383 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:bc:4b:9b:81:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/disk.qcow2
	I0828 10:28:22.367815    3383 main.go:141] libmachine: STDOUT: 
	I0828 10:28:22.367895    3383 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:28:22.368009    3383 fix.go:56] duration metric: took 21.036833ms for fixHost
	I0828 10:28:22.368030    3383 start.go:83] releasing machines lock for "ha-092000", held for 21.176625ms
	W0828 10:28:22.368262    3383 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:28:22.377028    3383 out.go:201] 
	W0828 10:28:22.381040    3383 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:28:22.381098    3383 out.go:270] * 
	* 
	W0828 10:28:22.383917    3383 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:28:22.390062    3383 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-092000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-092000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000: exit status 7 (33.503125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.38s)
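
Every restart attempt in the run above dies at the same step: socket_vmnet_client cannot reach the daemon socket, so the qemu2 driver never gets a network file descriptor. A minimal pre-flight probe, sketched below in Go, would separate "socket file missing" from "daemon not listening" (the "Connection refused" case logged above). The path is the SocketVMnetPath from the profile config; the probe itself is illustrative and not part of the minikube test suite.

	// socketprobe.go - hypothetical pre-flight check for the socket_vmnet daemon.
	package main
	
	import (
		"fmt"
		"net"
		"os"
		"time"
	)
	
	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config above
	
		if _, err := os.Stat(sock); err != nil {
			fmt.Printf("socket file problem (is socket_vmnet installed?): %v\n", err)
			os.Exit(1)
		}
		// A stat can succeed while nothing is listening; "Connection refused"
		// in the log above means exactly that, so try an actual dial. The
		// socket may require the same privileges as the daemon that owns it.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("daemon not accepting connections: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}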

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-092000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.720333ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-092000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-092000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 10:28:22.535482    3396 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:28:22.535711    3396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:28:22.535714    3396 out.go:358] Setting ErrFile to fd 2...
	I0828 10:28:22.535716    3396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:28:22.535867    3396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:28:22.536115    3396 mustload.go:65] Loading cluster: ha-092000
	I0828 10:28:22.536323    3396 config.go:182] Loaded profile config "ha-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0828 10:28:22.536661    3396 out.go:270] ! The control-plane node ha-092000 host is not running (will try others): state=Stopped
	! The control-plane node ha-092000 host is not running (will try others): state=Stopped
	W0828 10:28:22.536774    3396 out.go:270] ! The control-plane node ha-092000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-092000-m02 host is not running (will try others): state=Stopped
	I0828 10:28:22.541226    3396 out.go:177] * The control-plane node ha-092000-m03 host is not running: state=Stopped
	I0828 10:28:22.544242    3396 out.go:177]   To start a cluster, run: "minikube start -p ha-092000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-092000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr: exit status 7 (31.904167ms)

                                                
                                                
-- stdout --
	ha-092000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-092000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-092000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-092000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 10:28:22.577798    3398 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:28:22.577975    3398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:28:22.577981    3398 out.go:358] Setting ErrFile to fd 2...
	I0828 10:28:22.577984    3398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:28:22.578135    3398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:28:22.578252    3398 out.go:352] Setting JSON to false
	I0828 10:28:22.578263    3398 mustload.go:65] Loading cluster: ha-092000
	I0828 10:28:22.578324    3398 notify.go:220] Checking for updates...
	I0828 10:28:22.578480    3398 config.go:182] Loaded profile config "ha-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:28:22.578486    3398 status.go:255] checking status of ha-092000 ...
	I0828 10:28:22.578699    3398 status.go:330] ha-092000 host status = "Stopped" (err=<nil>)
	I0828 10:28:22.578703    3398 status.go:343] host is not running, skipping remaining checks
	I0828 10:28:22.578705    3398 status.go:257] ha-092000 status: &{Name:ha-092000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:28:22.578715    3398 status.go:255] checking status of ha-092000-m02 ...
	I0828 10:28:22.578802    3398 status.go:330] ha-092000-m02 host status = "Stopped" (err=<nil>)
	I0828 10:28:22.578804    3398 status.go:343] host is not running, skipping remaining checks
	I0828 10:28:22.578806    3398 status.go:257] ha-092000-m02 status: &{Name:ha-092000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:28:22.578810    3398 status.go:255] checking status of ha-092000-m03 ...
	I0828 10:28:22.578894    3398 status.go:330] ha-092000-m03 host status = "Stopped" (err=<nil>)
	I0828 10:28:22.578896    3398 status.go:343] host is not running, skipping remaining checks
	I0828 10:28:22.578898    3398 status.go:257] ha-092000-m03 status: &{Name:ha-092000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:28:22.578901    3398 status.go:255] checking status of ha-092000-m04 ...
	I0828 10:28:22.578996    3398 status.go:330] ha-092000-m04 host status = "Stopped" (err=<nil>)
	I0828 10:28:22.578998    3398 status.go:343] host is not running, skipping remaining checks
	I0828 10:28:22.579000    3398 status.go:257] ha-092000-m04 status: &{Name:ha-092000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000: exit status 7 (31.243458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)
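
The delete fails fast with exit status 83 because every control-plane host is already Stopped, while the earlier start attempts exited 80 (GUEST_PROVISION) and plain status checks exit 7. A small driver sketch that branches on these codes follows; the code-to-meaning mapping is inferred only from the failures in this report, not from minikube's documented exit-code table.

	// exitcodes.go - sketch of driving minikube and branching on the exit
	// codes observed in this report (0, 80, 83); inferred, not authoritative.
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func run(args ...string) int {
		cmd := exec.Command("out/minikube-darwin-arm64", args...)
		if err := cmd.Run(); err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) {
				return ee.ExitCode()
			}
			return -1 // binary missing, killed by signal, etc.
		}
		return 0
	}
	
	func main() {
		switch code := run("-p", "ha-092000", "node", "delete", "m03"); code {
		case 0:
			fmt.Println("node deleted")
		case 83: // seen above: every control-plane host was Stopped
			fmt.Println("cluster not running; start it before deleting nodes")
		case 80: // seen above: GUEST_PROVISION, driver could not start the VM
			fmt.Println("guest provisioning failed; check the driver/network stack")
		default:
			fmt.Printf("unexpected exit code %d\n", code)
		}
	}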

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-092000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-092000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-092000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-092000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000: exit status 7 (51.820042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.04s)
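
The assertion at ha_test.go:413 compares the Status field from `profile list --output json` against "Degraded". A stripped-down version of that check is sketched below; the struct mirrors only the Name and Status fields of the JSON quoted above and is not the test suite's own decoder.

	// profilestatus.go - minimal sketch of the ha_test.go:413 check: decode
	// `minikube profile list --output json` and compare the Status field.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}
	
	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-092000" && p.Status != "Degraded" {
				fmt.Printf("want Degraded, got %q\n", p.Status) // here: "Stopped"
			}
		}
	}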

                                                
                                    
TestMultiControlPlane/serial/StopCluster (202.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 stop -v=7 --alsologtostderr
E0828 10:28:50.880002    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:29:10.251700    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:30:13.899765    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-092000 stop -v=7 --alsologtostderr: (3m21.985198083s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr: exit status 7 (72.541208ms)

                                                
                                                
-- stdout --
	ha-092000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-092000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-092000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-092000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 10:31:45.632843    3781 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:31:45.633886    3781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:31:45.633891    3781 out.go:358] Setting ErrFile to fd 2...
	I0828 10:31:45.633894    3781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:31:45.634088    3781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:31:45.634252    3781 out.go:352] Setting JSON to false
	I0828 10:31:45.634274    3781 mustload.go:65] Loading cluster: ha-092000
	I0828 10:31:45.634316    3781 notify.go:220] Checking for updates...
	I0828 10:31:45.634538    3781 config.go:182] Loaded profile config "ha-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:31:45.634545    3781 status.go:255] checking status of ha-092000 ...
	I0828 10:31:45.634827    3781 status.go:330] ha-092000 host status = "Stopped" (err=<nil>)
	I0828 10:31:45.634831    3781 status.go:343] host is not running, skipping remaining checks
	I0828 10:31:45.634834    3781 status.go:257] ha-092000 status: &{Name:ha-092000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:31:45.634846    3781 status.go:255] checking status of ha-092000-m02 ...
	I0828 10:31:45.634970    3781 status.go:330] ha-092000-m02 host status = "Stopped" (err=<nil>)
	I0828 10:31:45.634973    3781 status.go:343] host is not running, skipping remaining checks
	I0828 10:31:45.634976    3781 status.go:257] ha-092000-m02 status: &{Name:ha-092000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:31:45.634984    3781 status.go:255] checking status of ha-092000-m03 ...
	I0828 10:31:45.635099    3781 status.go:330] ha-092000-m03 host status = "Stopped" (err=<nil>)
	I0828 10:31:45.635102    3781 status.go:343] host is not running, skipping remaining checks
	I0828 10:31:45.635104    3781 status.go:257] ha-092000-m03 status: &{Name:ha-092000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:31:45.635109    3781 status.go:255] checking status of ha-092000-m04 ...
	I0828 10:31:45.635219    3781 status.go:330] ha-092000-m04 host status = "Stopped" (err=<nil>)
	I0828 10:31:45.635222    3781 status.go:343] host is not running, skipping remaining checks
	I0828 10:31:45.635225    3781 status.go:257] ha-092000-m04 status: &{Name:ha-092000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr": ha-092000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-092000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-092000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-092000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr": ha-092000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-092000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-092000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-092000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr": ha-092000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-092000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-092000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-092000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000: exit status 7 (33.1285ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.09s)
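
The three assertions above (ha_test.go:543/549/552) reduce to counting fixed substrings in the plain-text status output. The sketch below redoes that tally; the expected counts of two control planes, three stopped kubelets, and two stopped apiservers assume the earlier DeleteSecondaryNode step had succeeded, which it did not, so this run reports 3/4/3 instead.

	// statuscount.go - sketch of the substring tally the ha_test assertions
	// above rely on, run against the plain-text `status` output.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Exit status 7 is expected while everything is Stopped, so the
		// error is ignored; Output still returns the captured stdout.
		out, _ := exec.Command("out/minikube-darwin-arm64",
			"-p", "ha-092000", "status").Output()
		s := string(out)
	
		fmt.Println("control planes:     ", strings.Count(s, "type: Control Plane"))
		fmt.Println("kubelets stopped:   ", strings.Count(s, "kubelet: Stopped"))
		fmt.Println("apiservers stopped: ", strings.Count(s, "apiserver: Stopped"))
		// This run: 3/4/3, because m03 is still listed; the test wanted 2/3/2.
	}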

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-092000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-092000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.1820875s)

                                                
                                                
-- stdout --
	* [ha-092000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-092000" primary control-plane node in "ha-092000" cluster
	* Restarting existing qemu2 VM for "ha-092000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-092000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 10:31:45.698415    3785 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:31:45.698540    3785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:31:45.698544    3785 out.go:358] Setting ErrFile to fd 2...
	I0828 10:31:45.698546    3785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:31:45.698675    3785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:31:45.699742    3785 out.go:352] Setting JSON to false
	I0828 10:31:45.715980    3785 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3669,"bootTime":1724862636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:31:45.716046    3785 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:31:45.721036    3785 out.go:177] * [ha-092000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:31:45.729171    3785 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:31:45.729221    3785 notify.go:220] Checking for updates...
	I0828 10:31:45.737139    3785 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:31:45.740186    3785 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:31:45.744062    3785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:31:45.747192    3785 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:31:45.750157    3785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:31:45.753382    3785 config.go:182] Loaded profile config "ha-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:31:45.753641    3785 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:31:45.758153    3785 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:31:45.765111    3785 start.go:297] selected driver: qemu2
	I0828 10:31:45.765117    3785 start.go:901] validating driver "qemu2" against &{Name:ha-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-092000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:31:45.765186    3785 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:31:45.767509    3785 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:31:45.767555    3785 cni.go:84] Creating CNI manager for ""
	I0828 10:31:45.767561    3785 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0828 10:31:45.767608    3785 start.go:340] cluster config:
	{Name:ha-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-092000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:31:45.771160    3785 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:31:45.779161    3785 out.go:177] * Starting "ha-092000" primary control-plane node in "ha-092000" cluster
	I0828 10:31:45.783143    3785 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:31:45.783156    3785 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:31:45.783161    3785 cache.go:56] Caching tarball of preloaded images
	I0828 10:31:45.783221    3785 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:31:45.783227    3785 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:31:45.783293    3785 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/ha-092000/config.json ...
	I0828 10:31:45.783725    3785 start.go:360] acquireMachinesLock for ha-092000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:31:45.783762    3785 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "ha-092000"
	I0828 10:31:45.783774    3785 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:31:45.783781    3785 fix.go:54] fixHost starting: 
	I0828 10:31:45.783903    3785 fix.go:112] recreateIfNeeded on ha-092000: state=Stopped err=<nil>
	W0828 10:31:45.783911    3785 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:31:45.788091    3785 out.go:177] * Restarting existing qemu2 VM for "ha-092000" ...
	I0828 10:31:45.796109    3785 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:31:45.796145    3785 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:bc:4b:9b:81:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/disk.qcow2
	I0828 10:31:45.798173    3785 main.go:141] libmachine: STDOUT: 
	I0828 10:31:45.798195    3785 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:31:45.798225    3785 fix.go:56] duration metric: took 14.446208ms for fixHost
	I0828 10:31:45.798231    3785 start.go:83] releasing machines lock for "ha-092000", held for 14.4645ms
	W0828 10:31:45.798237    3785 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:31:45.798268    3785 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:31:45.798272    3785 start.go:729] Will try again in 5 seconds ...
	I0828 10:31:50.800234    3785 start.go:360] acquireMachinesLock for ha-092000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:31:50.800586    3785 start.go:364] duration metric: took 281.25µs to acquireMachinesLock for "ha-092000"
	I0828 10:31:50.800701    3785 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:31:50.800719    3785 fix.go:54] fixHost starting: 
	I0828 10:31:50.801420    3785 fix.go:112] recreateIfNeeded on ha-092000: state=Stopped err=<nil>
	W0828 10:31:50.801445    3785 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:31:50.808811    3785 out.go:177] * Restarting existing qemu2 VM for "ha-092000" ...
	I0828 10:31:50.811793    3785 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:31:50.811967    3785 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:bc:4b:9b:81:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/ha-092000/disk.qcow2
	I0828 10:31:50.820892    3785 main.go:141] libmachine: STDOUT: 
	I0828 10:31:50.820971    3785 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:31:50.821055    3785 fix.go:56] duration metric: took 20.336667ms for fixHost
	I0828 10:31:50.821071    3785 start.go:83] releasing machines lock for "ha-092000", held for 20.467875ms
	W0828 10:31:50.821214    3785 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:31:50.829777    3785 out.go:201] 
	W0828 10:31:50.833947    3785 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:31:50.833994    3785 out.go:270] * 
	* 
	W0828 10:31:50.836395    3785 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:31:50.843798    3785 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-092000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000: exit status 7 (62.627375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
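
Every start failure in this report reduces to the same root cause: nothing is listening on the unix socket at /var/run/socket_vmnet, so socket_vmnet_client exits with status 1 before QEMU ever launches. A minimal probe of that precondition (a sketch, not part of the test suite; the socket path is copied from the SocketVMnetPath field in the config dumps in this report):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // SocketVMnetPath, as recorded in the profile config dumped in this report.
        const sock = "/var/run/socket_vmnet"

        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // The same condition the qemu2 driver surfaces as
            // `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }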

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-092000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-092000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-092000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-092000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000: exit status 7 (30.776042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
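
The assertion at ha_test.go:413 decodes the `profile list --output json` payload shown above and compares the profile's Status field against "Degraded". A stripped-down sketch of that check (the types here are illustrative, not the test's actual ones; the JSON field names are taken from the dumped payload):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Only the two fields the assertion reads.
    type profile struct {
        Name   string `json:"Name"`
        Status string `json:"Status"`
    }

    type profileList struct {
        Valid []profile `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            if p.Name == "ha-092000" && p.Status != "Degraded" {
                // The branch this run hit: the cluster never came up,
                // so the profile reports "Stopped" instead of "Degraded".
                fmt.Printf("expected %q to have Degraded status, got %q\n", p.Name, p.Status)
            }
        }
    }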

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-092000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-092000 --control-plane -v=7 --alsologtostderr: exit status 83 (43.628875ms)

-- stdout --
	* The control-plane node ha-092000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-092000"

-- /stdout --
** stderr ** 
	I0828 10:31:51.025028    3800 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:31:51.025206    3800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:31:51.025209    3800 out.go:358] Setting ErrFile to fd 2...
	I0828 10:31:51.025211    3800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:31:51.025347    3800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:31:51.025590    3800 mustload.go:65] Loading cluster: ha-092000
	I0828 10:31:51.025805    3800 config.go:182] Loaded profile config "ha-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0828 10:31:51.026116    3800 out.go:270] ! The control-plane node ha-092000 host is not running (will try others): state=Stopped
	! The control-plane node ha-092000 host is not running (will try others): state=Stopped
	W0828 10:31:51.026217    3800 out.go:270] ! The control-plane node ha-092000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-092000-m02 host is not running (will try others): state=Stopped
	I0828 10:31:51.030496    3800 out.go:177] * The control-plane node ha-092000-m03 host is not running: state=Stopped
	I0828 10:31:51.034357    3800 out.go:177]   To start a cluster, run: "minikube start -p ha-092000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-092000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-092000 -n ha-092000: exit status 7 (31.354875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestImageBuild/serial/Setup (10.26s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-479000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-479000 --driver=qemu2 : exit status 80 (10.188168791s)

-- stdout --
	* [image-479000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-479000" primary control-plane node in "image-479000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-479000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-479000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-479000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-479000 -n image-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-479000 -n image-479000: exit status 7 (68.865542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.26s)
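
The OUTPUT: and ERROR: lines interleaved in the stdout above are relayed from socket_vmnet_client, which the driver execs as a wrapper around qemu-system-aarch64 (see the "executing:" lines elsewhere in this report). A sketch of that launch pattern, with the long qemu argument list elided and assuming only that the wrapper's streams are captured:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // qemu arguments elided; the full invocation appears in the
        // "executing:" log lines elsewhere in this report.
        cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
            "/var/run/socket_vmnet", "qemu-system-aarch64")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        err := cmd.Run()
        fmt.Printf("OUTPUT: %s\n", stdout.String()) // surfaces as the OUTPUT: lines above
        fmt.Printf("ERROR: %s\n", stderr.String())  // Connection refused when no daemon is listening
        if err != nil {
            fmt.Println(err) // "exit status 1", which the driver wraps into its error chain
        }
    }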

TestJSONOutput/start/Command (9.89s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-940000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-940000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.893007583s)

-- stdout --
	{"specversion":"1.0","id":"56a9a68b-4a24-4adc-b1de-2fda6c72a814","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-940000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ae6a048-123a-4ac1-88a5-32a2617c00b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19529"}}
	{"specversion":"1.0","id":"100a9e9e-2fc9-475c-b952-ba0462caeb5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig"}}
	{"specversion":"1.0","id":"d38b031c-0b23-4968-ae03-735767d6280e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"25f49e6f-37d0-4ddd-9323-aaa93ae01583","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"197c5b4a-41ab-4100-9144-554465a85cc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube"}}
	{"specversion":"1.0","id":"eabea3e9-a8f0-4eff-a618-87a2bc234673","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"56f233bb-0c21-4b00-aba5-b6d99cb0cd94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e233513-cdfd-420a-ac91-8c1c6a5bc7f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"3cc8bf21-e15e-45aa-80ca-eb3321aaaf6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-940000\" primary control-plane node in \"json-output-940000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"900e1506-7dd5-457d-bc7c-46ce3bac0ec6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"1c907027-0ca3-48e0-99cd-03ce178dfadb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-940000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"d33abc56-d14a-4924-aa91-78b4b3aa1f9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"5252e207-0f84-4937-9923-023acebb1233","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"4a942a90-cfb8-4aac-842b-bb2ffe3051ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-940000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"71e4d09d-edcf-434f-81fc-5cfcaaf07131","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"2373dc5b-beae-461a-8b8a-54cd0efedfd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-940000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.89s)
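
The two parse errors above follow directly from the leak: json_output_test.go expects every stdout line to be a cloud event, and the raw "OUTPUT: " line begins with 'O', which cannot start a JSON value. A self-contained sketch of the failure (the event line is abbreviated from the run above):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Two representative stdout lines from the run above: a valid cloud
        // event, then the raw "OUTPUT: " line leaked by socket_vmnet_client.
        lines := []string{
            `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating qemu2 VM ..."}}`,
            `OUTPUT: `,
        }
        for _, l := range lines {
            var ev map[string]any
            if err := json.Unmarshal([]byte(l), &ev); err != nil {
                // Prints: invalid character 'O' looking for beginning of value
                fmt.Printf("converting to cloud events: %v\n", err)
                continue
            }
            fmt.Println("event type:", ev["type"])
        }
    }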

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-940000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-940000 --output=json --user=testUser: exit status 83 (78.635583ms)

-- stdout --
	{"specversion":"1.0","id":"1e333835-6572-4548-9c97-b5b218c562b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-940000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"de29b878-dc82-436f-8ab7-f2b6efdc7bf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-940000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-940000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-940000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-940000 --output=json --user=testUser: exit status 83 (46.16ms)

-- stdout --
	* The control-plane node json-output-940000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-940000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-940000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-940000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.06s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-503000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-503000 --driver=qemu2 : exit status 80 (9.763000333s)

-- stdout --
	* [first-503000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-503000" primary control-plane node in "first-503000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-503000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-503000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-503000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-28 10:32:24.102741 -0700 PDT m=+2516.014443834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-504000 -n second-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-504000 -n second-504000: exit status 85 (80.857583ms)

-- stdout --
	* Profile "second-504000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-504000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-504000" host is not running, skipping log retrieval (state="* Profile \"second-504000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-504000\"")
helpers_test.go:175: Cleaning up "second-504000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-504000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-28 10:32:24.290823 -0700 PDT m=+2516.202532626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-503000 -n first-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-503000 -n first-503000: exit status 7 (31.22725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-503000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-503000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-503000
--- FAIL: TestMinikubeProfile (10.06s)
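
Note how the post-mortem helpers interpret `status --format={{.Host}}`: a non-zero exit is tolerated as "may be ok", with 7 meaning the host exists but is stopped and 85 meaning the profile was never created. A sketch of that exit-code handling (binary path and profile name copied from the run above; error handling simplified):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "status", "--format={{.Host}}", "-p", "first-503000", "-n", "first-503000").CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // helpers_test.go tolerates non-zero exits here: 7 means the host
            // exists but is stopped; 85 means the profile was never created.
            fmt.Printf("status error: exit status %d (may be ok)\n", ee.ExitCode())
        }
        fmt.Printf("state=%q\n", string(out))
    }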

TestMountStart/serial/StartWithMountFirst (10.01s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-879000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-879000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.933744959s)

-- stdout --
	* [mount-start-1-879000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-879000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-879000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-879000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-879000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-879000 -n mount-start-1-879000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-879000 -n mount-start-1-879000: exit status 7 (72.224667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-879000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.01s)

TestMultiNode/serial/FreshStart2Nodes (9.89s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-223000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-223000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.815694417s)

-- stdout --
	* [multinode-223000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-223000" primary control-plane node in "multinode-223000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-223000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:32:34.621490    3935 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:32:34.621629    3935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:32:34.621632    3935 out.go:358] Setting ErrFile to fd 2...
	I0828 10:32:34.621635    3935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:32:34.621780    3935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:32:34.622814    3935 out.go:352] Setting JSON to false
	I0828 10:32:34.638930    3935 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3718,"bootTime":1724862636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:32:34.638996    3935 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:32:34.646251    3935 out.go:177] * [multinode-223000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:32:34.654306    3935 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:32:34.654369    3935 notify.go:220] Checking for updates...
	I0828 10:32:34.662193    3935 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:32:34.665247    3935 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:32:34.668257    3935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:32:34.671169    3935 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:32:34.674285    3935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:32:34.677406    3935 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:32:34.680198    3935 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:32:34.687270    3935 start.go:297] selected driver: qemu2
	I0828 10:32:34.687276    3935 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:32:34.687287    3935 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:32:34.689443    3935 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:32:34.690680    3935 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:32:34.693328    3935 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:32:34.693349    3935 cni.go:84] Creating CNI manager for ""
	I0828 10:32:34.693353    3935 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0828 10:32:34.693358    3935 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0828 10:32:34.693383    3935 start.go:340] cluster config:
	{Name:multinode-223000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-223000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:32:34.696828    3935 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:32:34.705329    3935 out.go:177] * Starting "multinode-223000" primary control-plane node in "multinode-223000" cluster
	I0828 10:32:34.709274    3935 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:32:34.709287    3935 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:32:34.709293    3935 cache.go:56] Caching tarball of preloaded images
	I0828 10:32:34.709342    3935 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:32:34.709348    3935 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:32:34.709554    3935 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/multinode-223000/config.json ...
	I0828 10:32:34.709569    3935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/multinode-223000/config.json: {Name:mk4810b8feb1da044f485505f38742eb096dc4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:32:34.709788    3935 start.go:360] acquireMachinesLock for multinode-223000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:32:34.709825    3935 start.go:364] duration metric: took 30.791µs to acquireMachinesLock for "multinode-223000"
	I0828 10:32:34.709839    3935 start.go:93] Provisioning new machine with config: &{Name:multinode-223000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-223000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:32:34.709866    3935 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:32:34.718242    3935 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:32:34.735191    3935 start.go:159] libmachine.API.Create for "multinode-223000" (driver="qemu2")
	I0828 10:32:34.735225    3935 client.go:168] LocalClient.Create starting
	I0828 10:32:34.735288    3935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:32:34.735317    3935 main.go:141] libmachine: Decoding PEM data...
	I0828 10:32:34.735325    3935 main.go:141] libmachine: Parsing certificate...
	I0828 10:32:34.735363    3935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:32:34.735385    3935 main.go:141] libmachine: Decoding PEM data...
	I0828 10:32:34.735395    3935 main.go:141] libmachine: Parsing certificate...
	I0828 10:32:34.735787    3935 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:32:34.915484    3935 main.go:141] libmachine: Creating SSH key...
	I0828 10:32:34.996518    3935 main.go:141] libmachine: Creating Disk image...
	I0828 10:32:34.996523    3935 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:32:34.996704    3935 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2
	I0828 10:32:35.006031    3935 main.go:141] libmachine: STDOUT: 
	I0828 10:32:35.006050    3935 main.go:141] libmachine: STDERR: 
	I0828 10:32:35.006114    3935 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2 +20000M
	I0828 10:32:35.014038    3935 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:32:35.014059    3935 main.go:141] libmachine: STDERR: 
	I0828 10:32:35.014071    3935 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2
	I0828 10:32:35.014075    3935 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:32:35.014083    3935 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:32:35.014122    3935 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:52:27:b6:16:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2
	I0828 10:32:35.015748    3935 main.go:141] libmachine: STDOUT: 
	I0828 10:32:35.015762    3935 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:32:35.015781    3935 client.go:171] duration metric: took 280.560625ms to LocalClient.Create
	I0828 10:32:37.018086    3935 start.go:128] duration metric: took 2.308260833s to createHost
	I0828 10:32:37.018164    3935 start.go:83] releasing machines lock for "multinode-223000", held for 2.308413125s
	W0828 10:32:37.018226    3935 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:32:37.032377    3935 out.go:177] * Deleting "multinode-223000" in qemu2 ...
	W0828 10:32:37.063032    3935 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:32:37.063056    3935 start.go:729] Will try again in 5 seconds ...
	I0828 10:32:42.065089    3935 start.go:360] acquireMachinesLock for multinode-223000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:32:42.065503    3935 start.go:364] duration metric: took 330.417µs to acquireMachinesLock for "multinode-223000"
	I0828 10:32:42.065620    3935 start.go:93] Provisioning new machine with config: &{Name:multinode-223000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-223000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:32:42.065896    3935 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:32:42.076418    3935 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:32:42.126290    3935 start.go:159] libmachine.API.Create for "multinode-223000" (driver="qemu2")
	I0828 10:32:42.126345    3935 client.go:168] LocalClient.Create starting
	I0828 10:32:42.126450    3935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:32:42.126504    3935 main.go:141] libmachine: Decoding PEM data...
	I0828 10:32:42.126522    3935 main.go:141] libmachine: Parsing certificate...
	I0828 10:32:42.126585    3935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:32:42.126628    3935 main.go:141] libmachine: Decoding PEM data...
	I0828 10:32:42.126640    3935 main.go:141] libmachine: Parsing certificate...
	I0828 10:32:42.127231    3935 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:32:42.297055    3935 main.go:141] libmachine: Creating SSH key...
	I0828 10:32:42.339690    3935 main.go:141] libmachine: Creating Disk image...
	I0828 10:32:42.339695    3935 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:32:42.339871    3935 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2
	I0828 10:32:42.349039    3935 main.go:141] libmachine: STDOUT: 
	I0828 10:32:42.349055    3935 main.go:141] libmachine: STDERR: 
	I0828 10:32:42.349094    3935 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2 +20000M
	I0828 10:32:42.356885    3935 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:32:42.356901    3935 main.go:141] libmachine: STDERR: 
	I0828 10:32:42.356918    3935 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2
	I0828 10:32:42.356923    3935 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:32:42.356933    3935 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:32:42.356958    3935 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:f1:89:95:f0:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2
	I0828 10:32:42.358598    3935 main.go:141] libmachine: STDOUT: 
	I0828 10:32:42.358613    3935 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:32:42.358626    3935 client.go:171] duration metric: took 232.284208ms to LocalClient.Create
	I0828 10:32:44.360727    3935 start.go:128] duration metric: took 2.294882333s to createHost
	I0828 10:32:44.360789    3935 start.go:83] releasing machines lock for "multinode-223000", held for 2.295345417s
	W0828 10:32:44.361216    3935 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-223000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-223000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:32:44.375844    3935 out.go:201] 
	W0828 10:32:44.378836    3935 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:32:44.378865    3935 out.go:270] * 
	* 
	W0828 10:32:44.381268    3935 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:32:44.394878    3935 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-223000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (70.420375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.89s)
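
Every qemu2 start in this run fails the same way: the driver cannot reach "/var/run/socket_vmnet" ("Connection refused"), meaning the socket_vmnet daemon was not running on the Jenkins host, so no VM networking was available and every subsequent subtest inherits a stopped host. As a hedged illustration (not part of the test suite), a pre-flight probe of that endpoint in Go could fail fast before any cluster start is attempted; the socket path is the SocketVMnetPath value recorded in the profile config later in this report:

	// socket_probe.go — illustrative sketch only: dial the unix socket the
	// qemu2 driver depends on before running "minikube start".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this runner this would print the same "Connection refused"
			// that libmachine logs above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}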

TestMultiNode/serial/DeployApp2Nodes (77.85s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.03375ms)

** stderr ** 
	error: cluster "multinode-223000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- rollout status deployment/busybox: exit status 1 (60.480875ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.9125ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.615ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.903375ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.967458ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.361916ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.932125ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.459667ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.580417ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.865042ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0828 10:33:50.803616    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.98375ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.814542ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.451541ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.5625ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.832125ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (30.886625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (77.85s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-223000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.781042ms)

** stderr ** 
	error: no server found for cluster "multinode-223000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (30.447584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-223000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-223000 -v 3 --alsologtostderr: exit status 83 (41.799584ms)

-- stdout --
	* The control-plane node multinode-223000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-223000"

-- /stdout --
** stderr ** 
	I0828 10:34:02.442056    4021 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:02.442209    4021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:02.442213    4021 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:02.442215    4021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:02.442355    4021 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:02.442602    4021 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:02.442805    4021 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:02.446997    4021 out.go:177] * The control-plane node multinode-223000 host is not running: state=Stopped
	I0828 10:34:02.449768    4021 out.go:177]   To start a cluster, run: "minikube start -p multinode-223000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-223000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (30.691917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-223000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-223000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.806417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-223000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-223000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-223000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (32.057042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.07s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-223000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-223000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-223000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-223000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (30.636375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
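
The assertion at multinode_test.go:166 counts entries in the Nodes array inside the profile's Config; the captured JSON shows one node where three were requested, because the second node was never created. A minimal Go sketch of that count, using only field names visible in the JSON above (the helper and its trimmed types are illustrative, not the test's actual code):

	// node_count.go — illustrative sketch of counting nodes in the output
	// of "minikube profile list --output json".
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList models only the fields this check needs.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func nodeCount(data []byte, profile string) (int, error) {
		var pl profileList
		if err := json.Unmarshal(data, &pl); err != nil {
			return 0, err
		}
		for _, p := range pl.Valid {
			if p.Name == profile {
				return len(p.Config.Nodes), nil
			}
		}
		return 0, fmt.Errorf("profile %q not found", profile)
	}

	func main() {
		// Trimmed form of the JSON captured in the failure above.
		data := []byte(`{"invalid":[],"valid":[{"Name":"multinode-223000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
		fmt.Println(nodeCount(data, "multinode-223000")) // prints: 1 <nil>; the test wanted 3
	}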

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status --output json --alsologtostderr: exit status 7 (30.876375ms)

-- stdout --
	{"Name":"multinode-223000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0828 10:34:02.651869    4033 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:02.652013    4033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:02.652017    4033 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:02.652019    4033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:02.652153    4033 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:02.652270    4033 out.go:352] Setting JSON to true
	I0828 10:34:02.652283    4033 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:02.652345    4033 notify.go:220] Checking for updates...
	I0828 10:34:02.652478    4033 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:02.652484    4033 status.go:255] checking status of multinode-223000 ...
	I0828 10:34:02.652696    4033 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:34:02.652700    4033 status.go:343] host is not running, skipping remaining checks
	I0828 10:34:02.652702    4033 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-223000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (30.706292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
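
The CopyFile failure above is a decode mismatch rather than a copy error: with a single node, "minikube status --output json" emits one JSON object, while the harness unmarshals into []cmd.Status, hence "cannot unmarshal object into Go value". A tolerant decoder would accept both shapes; a minimal sketch (the Status fields are copied from the stdout above, the helper itself is hypothetical):

	// status_decode.go — illustrative sketch: accept either a single status
	// object or an array of them.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status mirrors the fields visible in the captured stdout.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func parseStatuses(data []byte) ([]Status, error) {
		var many []Status
		if err := json.Unmarshal(data, &many); err == nil {
			return many, nil
		}
		var one Status
		if err := json.Unmarshal(data, &one); err != nil {
			return nil, err
		}
		return []Status{one}, nil
	}

	func main() {
		out := []byte(`{"Name":"multinode-223000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		fmt.Println(parseStatuses(out))
	}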

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 node stop m03: exit status 85 (47.523542ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-223000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status: exit status 7 (31.402042ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status --alsologtostderr: exit status 7 (30.2955ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:34:02.792829    4041 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:02.792967    4041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:02.792974    4041 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:02.792977    4041 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:02.793099    4041 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:02.793219    4041 out.go:352] Setting JSON to false
	I0828 10:34:02.793229    4041 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:02.793289    4041 notify.go:220] Checking for updates...
	I0828 10:34:02.793444    4041 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:02.793450    4041 status.go:255] checking status of multinode-223000 ...
	I0828 10:34:02.793652    4041 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:34:02.793657    4041 status.go:343] host is not running, skipping remaining checks
	I0828 10:34:02.793659    4041 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-223000 status --alsologtostderr": multinode-223000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (30.316542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (49.95s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.598167ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0828 10:34:02.854759    4045 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:02.854977    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:02.854983    4045 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:02.854986    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:02.855133    4045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:02.855381    4045 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:02.855558    4045 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:02.858923    4045 out.go:201] 
	W0828 10:34:02.861920    4045 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0828 10:34:02.861931    4045 out.go:270] * 
	* 
	W0828 10:34:02.863516    4045 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:34:02.866795    4045 out.go:201] 

** /stderr **
multinode_test.go:284: I0828 10:34:02.854759    4045 out.go:345] Setting OutFile to fd 1 ...
I0828 10:34:02.854977    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:34:02.854983    4045 out.go:358] Setting ErrFile to fd 2...
I0828 10:34:02.854986    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:34:02.855133    4045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
I0828 10:34:02.855381    4045 mustload.go:65] Loading cluster: multinode-223000
I0828 10:34:02.855558    4045 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:34:02.858923    4045 out.go:201] 
W0828 10:34:02.861920    4045 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0828 10:34:02.861931    4045 out.go:270] * 
* 
W0828 10:34:02.863516    4045 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0828 10:34:02.866795    4045 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-223000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr: exit status 7 (31.403125ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:34:02.901626    4047 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:02.901769    4047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:02.901772    4047 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:02.901776    4047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:02.901912    4047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:02.902036    4047 out.go:352] Setting JSON to false
	I0828 10:34:02.902046    4047 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:02.902101    4047 notify.go:220] Checking for updates...
	I0828 10:34:02.902221    4047 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:02.902227    4047 status.go:255] checking status of multinode-223000 ...
	I0828 10:34:02.902434    4047 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:34:02.902438    4047 status.go:343] host is not running, skipping remaining checks
	I0828 10:34:02.902441    4047 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr: exit status 7 (73.974667ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:34:03.779505    4049 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:03.779757    4049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:03.779761    4049 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:03.779764    4049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:03.779946    4049 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:03.780095    4049 out.go:352] Setting JSON to false
	I0828 10:34:03.780118    4049 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:03.780159    4049 notify.go:220] Checking for updates...
	I0828 10:34:03.780383    4049 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:03.780391    4049 status.go:255] checking status of multinode-223000 ...
	I0828 10:34:03.780657    4049 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:34:03.780662    4049 status.go:343] host is not running, skipping remaining checks
	I0828 10:34:03.780665    4049 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr: exit status 7 (73.862041ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:34:06.042517    4051 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:06.042701    4051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:06.042705    4051 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:06.042709    4051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:06.042908    4051 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:06.043060    4051 out.go:352] Setting JSON to false
	I0828 10:34:06.043074    4051 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:06.043109    4051 notify.go:220] Checking for updates...
	I0828 10:34:06.043347    4051 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:06.043355    4051 status.go:255] checking status of multinode-223000 ...
	I0828 10:34:06.043634    4051 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:34:06.043639    4051 status.go:343] host is not running, skipping remaining checks
	I0828 10:34:06.043642    4051 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr: exit status 7 (76.163542ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:34:08.446080    4053 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:08.446278    4053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:08.446282    4053 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:08.446285    4053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:08.446452    4053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:08.446614    4053 out.go:352] Setting JSON to false
	I0828 10:34:08.446628    4053 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:08.446678    4053 notify.go:220] Checking for updates...
	I0828 10:34:08.446917    4053 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:08.446927    4053 status.go:255] checking status of multinode-223000 ...
	I0828 10:34:08.447228    4053 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:34:08.447233    4053 status.go:343] host is not running, skipping remaining checks
	I0828 10:34:08.447236    4053 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0828 10:34:10.241029    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr: exit status 7 (75.414583ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:34:10.972671    4055 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:10.972868    4055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:10.972872    4055 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:10.972875    4055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:10.973037    4055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:10.973187    4055 out.go:352] Setting JSON to false
	I0828 10:34:10.973201    4055 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:10.973244    4055 notify.go:220] Checking for updates...
	I0828 10:34:10.973468    4055 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:10.973476    4055 status.go:255] checking status of multinode-223000 ...
	I0828 10:34:10.973749    4055 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:34:10.973754    4055 status.go:343] host is not running, skipping remaining checks
	I0828 10:34:10.973757    4055 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr: exit status 7 (75.037666ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:34:14.697190    4057 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:14.697376    4057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:14.697380    4057 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:14.697384    4057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:14.697576    4057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:14.697748    4057 out.go:352] Setting JSON to false
	I0828 10:34:14.697761    4057 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:14.697786    4057 notify.go:220] Checking for updates...
	I0828 10:34:14.698013    4057 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:14.698025    4057 status.go:255] checking status of multinode-223000 ...
	I0828 10:34:14.698280    4057 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:34:14.698285    4057 status.go:343] host is not running, skipping remaining checks
	I0828 10:34:14.698288    4057 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr: exit status 7 (77.138709ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:34:20.416625    4059 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:20.416858    4059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:20.416863    4059 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:20.416867    4059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:20.417068    4059 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:20.417211    4059 out.go:352] Setting JSON to false
	I0828 10:34:20.417225    4059 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:20.417264    4059 notify.go:220] Checking for updates...
	I0828 10:34:20.417488    4059 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:20.417495    4059 status.go:255] checking status of multinode-223000 ...
	I0828 10:34:20.417792    4059 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:34:20.417797    4059 status.go:343] host is not running, skipping remaining checks
	I0828 10:34:20.417799    4059 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr: exit status 7 (74.174917ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:34:28.310868    4061 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:28.311027    4061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:28.311031    4061 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:28.311034    4061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:28.311212    4061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:28.311358    4061 out.go:352] Setting JSON to false
	I0828 10:34:28.311372    4061 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:28.311406    4061 notify.go:220] Checking for updates...
	I0828 10:34:28.311619    4061 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:28.311630    4061 status.go:255] checking status of multinode-223000 ...
	I0828 10:34:28.311898    4061 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:34:28.311903    4061 status.go:343] host is not running, skipping remaining checks
	I0828 10:34:28.311906    4061 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr: exit status 7 (77.182667ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:34:52.736395    4068 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:52.736622    4068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:52.736627    4068 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:52.736630    4068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:52.736801    4068 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:52.736967    4068 out.go:352] Setting JSON to false
	I0828 10:34:52.736982    4068 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:34:52.737018    4068 notify.go:220] Checking for updates...
	I0828 10:34:52.737249    4068 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:52.737258    4068 status.go:255] checking status of multinode-223000 ...
	I0828 10:34:52.737549    4068 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:34:52.737554    4068 status.go:343] host is not running, skipping remaining checks
	I0828 10:34:52.737557    4068 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-223000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (34.253625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (49.95s)
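Note on the repeated "exit status 7" above: minikube's status command encodes component state as bit flags in its exit code, so 7 plausibly means host, kubelet, and apiserver are all down, which matches the "Stopped" output. A minimal Go sketch of that decoding; the flag values are an assumption based on minikube's cmd/minikube/cmd/status.go and should be re-checked against the release under test:

// statusexit.go: decode a `minikube status` exit code into the stopped
// components it appears to encode (flag values assumed from status.go).
package main

import "fmt"

const (
	hostNotRunningFlag    = 1 << 0 // minikube host stopped
	clusterNotRunningFlag = 1 << 1 // kubelet/cluster stopped
	k8sNotRunningFlag     = 1 << 2 // apiserver unreachable
)

func decode(code int) []string {
	var down []string
	if code&hostNotRunningFlag != 0 {
		down = append(down, "host")
	}
	if code&clusterNotRunningFlag != 0 {
		down = append(down, "kubelet")
	}
	if code&k8sNotRunningFlag != 0 {
		down = append(down, "apiserver")
	}
	return down
}

func main() {
	fmt.Println(decode(7)) // [host kubelet apiserver]
}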

TestMultiNode/serial/RestartKeepsNodes (8.93s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-223000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-223000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-223000: (3.579362042s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-223000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-223000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.215527166s)

-- stdout --
	* [multinode-223000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-223000" primary control-plane node in "multinode-223000" cluster
	* Restarting existing qemu2 VM for "multinode-223000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-223000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:34:56.447099    4094 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:34:56.447256    4094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:56.447260    4094 out.go:358] Setting ErrFile to fd 2...
	I0828 10:34:56.447263    4094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:34:56.447445    4094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:34:56.448656    4094 out.go:352] Setting JSON to false
	I0828 10:34:56.467585    4094 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3860,"bootTime":1724862636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:34:56.467656    4094 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:34:56.471685    4094 out.go:177] * [multinode-223000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:34:56.478689    4094 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:34:56.478755    4094 notify.go:220] Checking for updates...
	I0828 10:34:56.485566    4094 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:34:56.488617    4094 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:34:56.491541    4094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:34:56.494600    4094 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:34:56.497594    4094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:34:56.500857    4094 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:34:56.500914    4094 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:34:56.505518    4094 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:34:56.512560    4094 start.go:297] selected driver: qemu2
	I0828 10:34:56.512569    4094 start.go:901] validating driver "qemu2" against &{Name:multinode-223000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-223000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:34:56.512653    4094 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:34:56.514995    4094 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:34:56.515040    4094 cni.go:84] Creating CNI manager for ""
	I0828 10:34:56.515047    4094 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0828 10:34:56.515101    4094 start.go:340] cluster config:
	{Name:multinode-223000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-223000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:34:56.518835    4094 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:34:56.526561    4094 out.go:177] * Starting "multinode-223000" primary control-plane node in "multinode-223000" cluster
	I0828 10:34:56.530531    4094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:34:56.530544    4094 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:34:56.530552    4094 cache.go:56] Caching tarball of preloaded images
	I0828 10:34:56.530614    4094 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:34:56.530619    4094 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:34:56.530673    4094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/multinode-223000/config.json ...
	I0828 10:34:56.531136    4094 start.go:360] acquireMachinesLock for multinode-223000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:34:56.531173    4094 start.go:364] duration metric: took 30.416µs to acquireMachinesLock for "multinode-223000"
	I0828 10:34:56.531184    4094 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:34:56.531193    4094 fix.go:54] fixHost starting: 
	I0828 10:34:56.531321    4094 fix.go:112] recreateIfNeeded on multinode-223000: state=Stopped err=<nil>
	W0828 10:34:56.531330    4094 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:34:56.535598    4094 out.go:177] * Restarting existing qemu2 VM for "multinode-223000" ...
	I0828 10:34:56.539585    4094 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:34:56.539626    4094 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:f1:89:95:f0:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2
	I0828 10:34:56.541798    4094 main.go:141] libmachine: STDOUT: 
	I0828 10:34:56.541822    4094 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:34:56.541852    4094 fix.go:56] duration metric: took 10.661959ms for fixHost
	I0828 10:34:56.541857    4094 start.go:83] releasing machines lock for "multinode-223000", held for 10.679041ms
	W0828 10:34:56.541865    4094 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:34:56.541906    4094 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:34:56.541915    4094 start.go:729] Will try again in 5 seconds ...
	I0828 10:35:01.543944    4094 start.go:360] acquireMachinesLock for multinode-223000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:35:01.544451    4094 start.go:364] duration metric: took 364µs to acquireMachinesLock for "multinode-223000"
	I0828 10:35:01.544585    4094 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:35:01.544605    4094 fix.go:54] fixHost starting: 
	I0828 10:35:01.545323    4094 fix.go:112] recreateIfNeeded on multinode-223000: state=Stopped err=<nil>
	W0828 10:35:01.545351    4094 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:35:01.549812    4094 out.go:177] * Restarting existing qemu2 VM for "multinode-223000" ...
	I0828 10:35:01.553809    4094 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:35:01.553988    4094 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:f1:89:95:f0:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2
	I0828 10:35:01.563271    4094 main.go:141] libmachine: STDOUT: 
	I0828 10:35:01.563342    4094 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:35:01.563428    4094 fix.go:56] duration metric: took 18.825334ms for fixHost
	I0828 10:35:01.563450    4094 start.go:83] releasing machines lock for "multinode-223000", held for 18.973292ms
	W0828 10:35:01.563603    4094 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-223000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-223000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:35:01.570763    4094 out.go:201] 
	W0828 10:35:01.574846    4094 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:35:01.574875    4094 out.go:270] * 
	* 
	W0828 10:35:01.577437    4094 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:35:01.585727    4094 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 start -p multinode-223000 --wait=true -v=8 --alsologtostderr" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-223000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (33.179292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.93s)
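Every restart attempt above dies at the same point: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the connect to /var/run/socket_vmnet is refused, i.e. nothing is listening on that socket. A standalone probe can confirm this before re-running the suite; a minimal sketch using only the Go standard library, with the socket path taken from the failing log lines:

// socketprobe.go: checks whether anything is listening on the
// socket_vmnet Unix socket that the qemu2 driver depends on.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the log above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Same symptom as the "Connection refused" in the report.
		fmt.Fprintf(os.Stderr, "socket_vmnet is not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}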

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 node delete m03: exit status 83 (41.16275ms)

-- stdout --
	* The control-plane node multinode-223000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-223000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-223000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status --alsologtostderr: exit status 7 (30.766625ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:35:01.771554    4108 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:35:01.771697    4108 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:35:01.771700    4108 out.go:358] Setting ErrFile to fd 2...
	I0828 10:35:01.771703    4108 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:35:01.771839    4108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:35:01.771967    4108 out.go:352] Setting JSON to false
	I0828 10:35:01.771978    4108 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:35:01.772031    4108 notify.go:220] Checking for updates...
	I0828 10:35:01.772174    4108 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:35:01.772182    4108 status.go:255] checking status of multinode-223000 ...
	I0828 10:35:01.772401    4108 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:35:01.772405    4108 status.go:343] host is not running, skipping remaining checks
	I0828 10:35:01.772407    4108 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-223000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (31.426292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
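The post-mortem helper treats exit status 7 from "minikube status --format={{.Host}}" as potentially fine ("may be ok"), since a cleanly stopped host also exits non-zero. A sketch of that probe pattern; the binary and profile names are the ones from this run, and the tolerance logic is an assumption modeled on the helper output above:

// postmortem.go: sketch of the status probe the post-mortem helper runs,
// tolerating the non-zero exit that a stopped host produces.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostState(binary, profile string) (string, error) {
	out, err := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile).Output()
	state := strings.TrimSpace(string(out))
	if err != nil && state == "Stopped" {
		// A stopped host exits with status 7 but still prints "Stopped"
		// on stdout ("status error ... may be ok" in the report).
		return state, nil
	}
	return state, err
}

func main() {
	state, err := hostState("out/minikube-darwin-arm64", "multinode-223000")
	fmt.Println(state, err)
}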

TestMultiNode/serial/StopMultiNode (2.11s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-223000 stop: (1.982180125s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status: exit status 7 (67.048ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-223000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-223000 status --alsologtostderr: exit status 7 (33.214959ms)

-- stdout --
	multinode-223000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:35:03.886058    4126 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:35:03.886207    4126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:35:03.886211    4126 out.go:358] Setting ErrFile to fd 2...
	I0828 10:35:03.886213    4126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:35:03.886341    4126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:35:03.886460    4126 out.go:352] Setting JSON to false
	I0828 10:35:03.886472    4126 mustload.go:65] Loading cluster: multinode-223000
	I0828 10:35:03.886532    4126 notify.go:220] Checking for updates...
	I0828 10:35:03.886664    4126 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:35:03.886671    4126 status.go:255] checking status of multinode-223000 ...
	I0828 10:35:03.886884    4126 status.go:330] multinode-223000 host status = "Stopped" (err=<nil>)
	I0828 10:35:03.886887    4126 status.go:343] host is not running, skipping remaining checks
	I0828 10:35:03.886889    4126 status.go:257] multinode-223000 status: &{Name:multinode-223000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-223000 status --alsologtostderr": multinode-223000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-223000 status --alsologtostderr": multinode-223000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (31.325167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.11s)
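multinode_test.go:364/368 fail because the assertion counts "Stopped" entries in the status output: a two-node cluster should report two stopped hosts and two stopped kubelets, but after the earlier provisioning failures only the control-plane node exists. A minimal sketch of that counting style of check (the expected count of 2 is an assumption based on the test's two-node setup):

// countstopped.go: sketch of the "number of stopped hosts" assertion style,
// counting status lines and comparing against the expected node count.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output as captured in the report above.
	statusOut := `multinode-223000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped`

	const wantNodes = 2 // assumption: two-node cluster per the test name
	gotHosts := strings.Count(statusOut, "host: Stopped")
	gotKubelets := strings.Count(statusOut, "kubelet: Stopped")
	if gotHosts != wantNodes || gotKubelets != wantNodes {
		fmt.Printf("incorrect number of stopped hosts/kubelets: %d/%d, want %d\n",
			gotHosts, gotKubelets, wantNodes)
	}
}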

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-223000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-223000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.182098625s)

-- stdout --
	* [multinode-223000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-223000" primary control-plane node in "multinode-223000" cluster
	* Restarting existing qemu2 VM for "multinode-223000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-223000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:35:03.948052    4130 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:35:03.948177    4130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:35:03.948181    4130 out.go:358] Setting ErrFile to fd 2...
	I0828 10:35:03.948183    4130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:35:03.948317    4130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:35:03.949318    4130 out.go:352] Setting JSON to false
	I0828 10:35:03.965272    4130 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3867,"bootTime":1724862636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:35:03.965354    4130 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:35:03.970330    4130 out.go:177] * [multinode-223000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:35:03.977146    4130 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:35:03.977186    4130 notify.go:220] Checking for updates...
	I0828 10:35:03.985268    4130 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:35:03.989168    4130 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:35:03.992292    4130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:35:03.995283    4130 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:35:03.998217    4130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:35:04.001519    4130 config.go:182] Loaded profile config "multinode-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:35:04.001794    4130 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:35:04.006299    4130 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:35:04.013243    4130 start.go:297] selected driver: qemu2
	I0828 10:35:04.013247    4130 start.go:901] validating driver "qemu2" against &{Name:multinode-223000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-223000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:35:04.013310    4130 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:35:04.015527    4130 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:35:04.015567    4130 cni.go:84] Creating CNI manager for ""
	I0828 10:35:04.015572    4130 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0828 10:35:04.015623    4130 start.go:340] cluster config:
	{Name:multinode-223000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-223000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:35:04.019092    4130 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:35:04.026284    4130 out.go:177] * Starting "multinode-223000" primary control-plane node in "multinode-223000" cluster
	I0828 10:35:04.030338    4130 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:35:04.030354    4130 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:35:04.030364    4130 cache.go:56] Caching tarball of preloaded images
	I0828 10:35:04.030422    4130 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:35:04.030430    4130 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:35:04.030491    4130 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/multinode-223000/config.json ...
	I0828 10:35:04.030952    4130 start.go:360] acquireMachinesLock for multinode-223000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:35:04.030980    4130 start.go:364] duration metric: took 22.209µs to acquireMachinesLock for "multinode-223000"
	I0828 10:35:04.030990    4130 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:35:04.030996    4130 fix.go:54] fixHost starting: 
	I0828 10:35:04.031121    4130 fix.go:112] recreateIfNeeded on multinode-223000: state=Stopped err=<nil>
	W0828 10:35:04.031129    4130 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:35:04.035209    4130 out.go:177] * Restarting existing qemu2 VM for "multinode-223000" ...
	I0828 10:35:04.043318    4130 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:35:04.043363    4130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:f1:89:95:f0:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2
	I0828 10:35:04.045371    4130 main.go:141] libmachine: STDOUT: 
	I0828 10:35:04.045396    4130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:35:04.045425    4130 fix.go:56] duration metric: took 14.430291ms for fixHost
	I0828 10:35:04.045430    4130 start.go:83] releasing machines lock for "multinode-223000", held for 14.445458ms
	W0828 10:35:04.045437    4130 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:35:04.045480    4130 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:35:04.045485    4130 start.go:729] Will try again in 5 seconds ...
	I0828 10:35:09.047473    4130 start.go:360] acquireMachinesLock for multinode-223000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:35:09.047873    4130 start.go:364] duration metric: took 293.625µs to acquireMachinesLock for "multinode-223000"
	I0828 10:35:09.047997    4130 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:35:09.048015    4130 fix.go:54] fixHost starting: 
	I0828 10:35:09.048662    4130 fix.go:112] recreateIfNeeded on multinode-223000: state=Stopped err=<nil>
	W0828 10:35:09.048687    4130 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:35:09.052034    4130 out.go:177] * Restarting existing qemu2 VM for "multinode-223000" ...
	I0828 10:35:09.056047    4130 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:35:09.056326    4130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:f1:89:95:f0:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/multinode-223000/disk.qcow2
	I0828 10:35:09.065229    4130 main.go:141] libmachine: STDOUT: 
	I0828 10:35:09.065310    4130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:35:09.065373    4130 fix.go:56] duration metric: took 17.3565ms for fixHost
	I0828 10:35:09.065391    4130 start.go:83] releasing machines lock for "multinode-223000", held for 17.494291ms
	W0828 10:35:09.065570    4130 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-223000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-223000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:35:09.073007    4130 out.go:201] 
	W0828 10:35:09.077107    4130 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:35:09.077135    4130 out.go:270] * 
	* 
	W0828 10:35:09.079825    4130 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:35:09.088068    4130 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-223000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (69.182208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
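The start flow in the log is a simple retry: StartHost fails, minikube waits 5 seconds ("Will try again in 5 seconds ..."), retries once, then exits with the GUEST_PROVISION error (exit status 80). A condensed sketch of that control flow; startHost is a hypothetical stand-in for the real driver call, not minikube's API:

// retrystart.go: sketch of the start/retry flow visible in the log above:
// one retry after a fixed 5s delay, then surface the error (the exit-80 path).
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for minikube's fixHost/driver start; here it always
// fails the way this report does.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}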

TestMultiNode/serial/ValidateNameConflict (20.3s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-223000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-223000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-223000-m01 --driver=qemu2 : exit status 80 (10.013661209s)

-- stdout --
	* [multinode-223000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-223000-m01" primary control-plane node in "multinode-223000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-223000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-223000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-223000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-223000-m02 --driver=qemu2 : exit status 80 (10.054079708s)

-- stdout --
	* [multinode-223000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-223000-m02" primary control-plane node in "multinode-223000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-223000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-223000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-223000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-223000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-223000: exit status 83 (85.430916ms)

-- stdout --
	* The control-plane node multinode-223000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-223000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-223000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-223000 -n multinode-223000: exit status 7 (32.033417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.30s)
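
Note on the failure signature: every "Creating qemu2 VM" step in the runs above aborts at the same point, when socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet. That points at the socket_vmnet daemon not listening on the build host, not at a per-test regression. A plausible spot-check, assuming the Homebrew-packaged socket_vmnet that minikube's qemu2 driver documentation describes (these commands are illustrative and were not part of the recorded run):

	ls -l /var/run/socket_vmnet                # confirm the unix socket minikube dials actually exists
	sudo launchctl list | grep socket_vmnet    # confirm the launchd service for the daemon is loaded
	sudo brew services restart socket_vmnet    # restart the daemon before re-running the suite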

TestPreload (10.29s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-260000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-260000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.11921775s)

-- stdout --
	* [test-preload-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-260000" primary control-plane node in "test-preload-260000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-260000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:35:29.611788    4183 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:35:29.611915    4183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:35:29.611918    4183 out.go:358] Setting ErrFile to fd 2...
	I0828 10:35:29.611920    4183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:35:29.612053    4183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:35:29.613139    4183 out.go:352] Setting JSON to false
	I0828 10:35:29.629455    4183 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3893,"bootTime":1724862636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:35:29.629522    4183 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:35:29.636282    4183 out.go:177] * [test-preload-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:35:29.644234    4183 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:35:29.644264    4183 notify.go:220] Checking for updates...
	I0828 10:35:29.651226    4183 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:35:29.654196    4183 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:35:29.657212    4183 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:35:29.660245    4183 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:35:29.663196    4183 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:35:29.666599    4183 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:35:29.666662    4183 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:35:29.671244    4183 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:35:29.678201    4183 start.go:297] selected driver: qemu2
	I0828 10:35:29.678207    4183 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:35:29.678216    4183 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:35:29.680632    4183 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:35:29.684201    4183 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:35:29.687220    4183 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:35:29.687246    4183 cni.go:84] Creating CNI manager for ""
	I0828 10:35:29.687255    4183 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:35:29.687264    4183 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 10:35:29.687286    4183 start.go:340] cluster config:
	{Name:test-preload-260000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:35:29.691136    4183 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:35:29.698228    4183 out.go:177] * Starting "test-preload-260000" primary control-plane node in "test-preload-260000" cluster
	I0828 10:35:29.702250    4183 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0828 10:35:29.702333    4183 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/test-preload-260000/config.json ...
	I0828 10:35:29.702349    4183 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/test-preload-260000/config.json: {Name:mk9055072b047ec25c4bf446e188799861c48a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:35:29.702355    4183 cache.go:107] acquiring lock: {Name:mkf538eb0d7aa9fae1b842e5b9bb6f64b5f3d04f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:35:29.702355    4183 cache.go:107] acquiring lock: {Name:mkc657ef571a84aaae610bd3949cd4e439f0e635 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:35:29.702386    4183 cache.go:107] acquiring lock: {Name:mkff44c549d8bca72564b9cda68cf88ecbc2a24a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:35:29.702507    4183 cache.go:107] acquiring lock: {Name:mk1c96e5c05baa7a3852361bef71318a0c5ffc36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:35:29.702524    4183 cache.go:107] acquiring lock: {Name:mk8154cb86411eabc22f808848b29b66e2684721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:35:29.702532    4183 cache.go:107] acquiring lock: {Name:mkf9b37f12942f4bfaecce1e8c7aa7237506c98b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:35:29.702534    4183 cache.go:107] acquiring lock: {Name:mkce1003a88ca609f02dedfb25b21100ac1c9690 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:35:29.702590    4183 cache.go:107] acquiring lock: {Name:mkcd33b7a15fdd120510cdfda8c235ac389504b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:35:29.702671    4183 start.go:360] acquireMachinesLock for test-preload-260000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:35:29.702717    4183 start.go:364] duration metric: took 35.667µs to acquireMachinesLock for "test-preload-260000"
	I0828 10:35:29.702728    4183 start.go:93] Provisioning new machine with config: &{Name:test-preload-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:35:29.702775    4183 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:35:29.702930    4183 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0828 10:35:29.702940    4183 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0828 10:35:29.702943    4183 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0828 10:35:29.703304    4183 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0828 10:35:29.703318    4183 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:35:29.703341    4183 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0828 10:35:29.703333    4183 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0828 10:35:29.706112    4183 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:35:29.706421    4183 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:35:29.710702    4183 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0828 10:35:29.713676    4183 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0828 10:35:29.713711    4183 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0828 10:35:29.713784    4183 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:35:29.713830    4183 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0828 10:35:29.713971    4183 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0828 10:35:29.714029    4183 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0828 10:35:29.715488    4183 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:35:29.723731    4183 start.go:159] libmachine.API.Create for "test-preload-260000" (driver="qemu2")
	I0828 10:35:29.723756    4183 client.go:168] LocalClient.Create starting
	I0828 10:35:29.723851    4183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:35:29.723885    4183 main.go:141] libmachine: Decoding PEM data...
	I0828 10:35:29.723897    4183 main.go:141] libmachine: Parsing certificate...
	I0828 10:35:29.723935    4183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:35:29.723963    4183 main.go:141] libmachine: Decoding PEM data...
	I0828 10:35:29.723972    4183 main.go:141] libmachine: Parsing certificate...
	I0828 10:35:29.724328    4183 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:35:29.934811    4183 main.go:141] libmachine: Creating SSH key...
	I0828 10:35:30.129518    4183 main.go:141] libmachine: Creating Disk image...
	I0828 10:35:30.129550    4183 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:35:30.129739    4183 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/disk.qcow2
	I0828 10:35:30.139530    4183 main.go:141] libmachine: STDOUT: 
	I0828 10:35:30.139551    4183 main.go:141] libmachine: STDERR: 
	I0828 10:35:30.139592    4183 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/disk.qcow2 +20000M
	I0828 10:35:30.147922    4183 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:35:30.147938    4183 main.go:141] libmachine: STDERR: 
	I0828 10:35:30.147955    4183 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/disk.qcow2
	I0828 10:35:30.147959    4183 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:35:30.147970    4183 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:35:30.147993    4183 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e9:53:4b:9f:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/disk.qcow2
	I0828 10:35:30.149744    4183 main.go:141] libmachine: STDOUT: 
	I0828 10:35:30.149760    4183 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:35:30.149776    4183 client.go:171] duration metric: took 426.031583ms to LocalClient.Create
	W0828 10:35:30.749281    4183 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0828 10:35:30.749383    4183 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0828 10:35:30.818971    4183 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0828 10:35:30.837183    4183 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0828 10:35:30.849400    4183 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0828 10:35:30.977609    4183 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0828 10:35:30.995589    4183 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0828 10:35:30.995639    4183 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.293180208s
	I0828 10:35:30.995672    4183 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0828 10:35:31.030240    4183 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0828 10:35:31.047580    4183 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0828 10:35:31.307740    4183 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0828 10:35:31.307842    4183 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0828 10:35:32.149904    4183 start.go:128] duration metric: took 2.447166583s to createHost
	I0828 10:35:32.149971    4183 start.go:83] releasing machines lock for "test-preload-260000", held for 2.447334166s
	W0828 10:35:32.150036    4183 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:35:32.165935    4183 out.go:177] * Deleting "test-preload-260000" in qemu2 ...
	W0828 10:35:32.199199    4183 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:35:32.199236    4183 start.go:729] Will try again in 5 seconds ...
	I0828 10:35:32.308947    4183 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0828 10:35:32.308993    4183 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.606733292s
	I0828 10:35:32.309020    4183 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0828 10:35:33.276132    4183 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0828 10:35:33.276184    4183 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.573946542s
	I0828 10:35:33.276211    4183 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0828 10:35:33.310770    4183 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0828 10:35:33.310813    4183 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.608588875s
	I0828 10:35:33.310839    4183 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0828 10:35:33.493463    4183 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0828 10:35:33.493513    4183 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.791138833s
	I0828 10:35:33.493540    4183 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0828 10:35:35.690303    4183 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0828 10:35:35.690348    4183 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.9880475s
	I0828 10:35:35.690376    4183 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0828 10:35:36.234526    4183 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0828 10:35:36.234571    4183 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.532278083s
	I0828 10:35:36.234601    4183 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0828 10:35:37.199365    4183 start.go:360] acquireMachinesLock for test-preload-260000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:35:37.199768    4183 start.go:364] duration metric: took 326.5µs to acquireMachinesLock for "test-preload-260000"
	I0828 10:35:37.199882    4183 start.go:93] Provisioning new machine with config: &{Name:test-preload-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:35:37.200103    4183 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:35:37.205755    4183 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:35:37.257752    4183 start.go:159] libmachine.API.Create for "test-preload-260000" (driver="qemu2")
	I0828 10:35:37.257794    4183 client.go:168] LocalClient.Create starting
	I0828 10:35:37.257933    4183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:35:37.258000    4183 main.go:141] libmachine: Decoding PEM data...
	I0828 10:35:37.258020    4183 main.go:141] libmachine: Parsing certificate...
	I0828 10:35:37.258101    4183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:35:37.258148    4183 main.go:141] libmachine: Decoding PEM data...
	I0828 10:35:37.258182    4183 main.go:141] libmachine: Parsing certificate...
	I0828 10:35:37.258726    4183 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:35:37.430398    4183 main.go:141] libmachine: Creating SSH key...
	I0828 10:35:37.634885    4183 main.go:141] libmachine: Creating Disk image...
	I0828 10:35:37.634894    4183 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:35:37.635088    4183 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/disk.qcow2
	I0828 10:35:37.644759    4183 main.go:141] libmachine: STDOUT: 
	I0828 10:35:37.644780    4183 main.go:141] libmachine: STDERR: 
	I0828 10:35:37.644860    4183 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/disk.qcow2 +20000M
	I0828 10:35:37.652966    4183 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:35:37.652982    4183 main.go:141] libmachine: STDERR: 
	I0828 10:35:37.652993    4183 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/disk.qcow2
	I0828 10:35:37.652999    4183 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:35:37.653011    4183 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:35:37.653046    4183 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:95:c0:64:3b:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/test-preload-260000/disk.qcow2
	I0828 10:35:37.654829    4183 main.go:141] libmachine: STDOUT: 
	I0828 10:35:37.654844    4183 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:35:37.654858    4183 client.go:171] duration metric: took 397.074041ms to LocalClient.Create
	I0828 10:35:39.654986    4183 start.go:128] duration metric: took 2.454920167s to createHost
	I0828 10:35:39.655037    4183 start.go:83] releasing machines lock for "test-preload-260000", held for 2.455336291s
	W0828 10:35:39.655352    4183 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-260000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-260000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:35:39.664836    4183 out.go:201] 
	W0828 10:35:39.675123    4183 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:35:39.675179    4183 out.go:270] * 
	* 
	W0828 10:35:39.677865    4183 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:35:39.686934    4183 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-260000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-28 10:35:39.704801 -0700 PDT m=+2711.623563459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-260000 -n test-preload-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-260000 -n test-preload-260000: exit status 7 (69.592208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-260000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-260000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-260000
--- FAIL: TestPreload (10.29s)
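
The trace above also shows the exact launch path: minikube does not start qemu-system-aarch64 directly, it execs /opt/socket_vmnet/bin/socket_vmnet_client with the socket path as its first argument, and the client hands the connected socket to QEMU as fd 3 (the "-netdev socket,id=net0,fd=3" argument). The run therefore fails on the client's initial connect, before QEMU ever boots; the image caching that continues afterwards (the "arch mismatch ... fixing" and "save to tar file ... succeeded" lines) does not depend on the VM and completes normally. That failing connect can be exercised in isolation, assuming the same socket path (illustrative, not from the recorded run):

	nc -U /var/run/socket_vmnet </dev/null     # macOS netcat on the unix socket; errors out with "Connection refused" while no daemon listens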

TestScheduledStopUnix (10.13s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-571000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-571000 --memory=2048 --driver=qemu2 : exit status 80 (9.978682458s)

-- stdout --
	* [scheduled-stop-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-571000" primary control-plane node in "scheduled-stop-571000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-571000" primary control-plane node in "scheduled-stop-571000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-28 10:35:49.851617 -0700 PDT m=+2721.770745959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-571000 -n scheduled-stop-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-571000 -n scheduled-stop-571000: exit status 7 (69.122833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-571000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-571000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-571000
--- FAIL: TestScheduledStopUnix (10.13s)

TestSkaffold (13.95s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3381500250 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3381500250 version: (1.046541208s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-845000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-845000 --memory=2600 --driver=qemu2 : exit status 80 (9.821369167s)

-- stdout --
	* [skaffold-845000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-845000" primary control-plane node in "skaffold-845000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-845000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-845000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-845000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-845000" primary control-plane node in "skaffold-845000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-845000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-845000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-28 10:36:03.810873 -0700 PDT m=+2735.730505417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-845000 -n skaffold-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-845000 -n skaffold-845000: exit status 7 (62.055083ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-845000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-845000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-845000
--- FAIL: TestSkaffold (13.95s)

TestRunningBinaryUpgrade (599.86s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.540948566 start -p running-upgrade-717000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.540948566 start -p running-upgrade-717000 --memory=2200 --vm-driver=qemu2 : (58.89711675s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-717000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0828 10:38:50.793967    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:39:10.230135    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-717000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m25.8058665s)

-- stdout --
	* [running-upgrade-717000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-717000" primary control-plane node in "running-upgrade-717000" cluster
	* Updating the running qemu2 "running-upgrade-717000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0828 10:37:46.650664    4578 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:37:46.650800    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:37:46.650805    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:37:46.650808    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:37:46.650941    4578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:37:46.652007    4578 out.go:352] Setting JSON to false
	I0828 10:37:46.669146    4578 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4030,"bootTime":1724862636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:37:46.669217    4578 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:37:46.674367    4578 out.go:177] * [running-upgrade-717000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:37:46.681277    4578 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:37:46.681315    4578 notify.go:220] Checking for updates...
	I0828 10:37:46.689253    4578 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:37:46.693349    4578 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:37:46.694747    4578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:37:46.698288    4578 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:37:46.701284    4578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:37:46.704602    4578 config.go:182] Loaded profile config "running-upgrade-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:37:46.708334    4578 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 10:37:46.711251    4578 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:37:46.715307    4578 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:37:46.722271    4578 start.go:297] selected driver: qemu2
	I0828 10:37:46.722276    4578 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0828 10:37:46.722322    4578 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:37:46.724809    4578 cni.go:84] Creating CNI manager for ""
	I0828 10:37:46.724832    4578 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:37:46.724852    4578 start.go:340] cluster config:
	{Name:running-upgrade-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0828 10:37:46.724904    4578 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:37:46.732285    4578 out.go:177] * Starting "running-upgrade-717000" primary control-plane node in "running-upgrade-717000" cluster
	I0828 10:37:46.736318    4578 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0828 10:37:46.736343    4578 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0828 10:37:46.736348    4578 cache.go:56] Caching tarball of preloaded images
	I0828 10:37:46.736405    4578 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:37:46.736410    4578 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0828 10:37:46.736456    4578 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/config.json ...
	I0828 10:37:46.736962    4578 start.go:360] acquireMachinesLock for running-upgrade-717000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:37:46.736995    4578 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "running-upgrade-717000"
	I0828 10:37:46.737004    4578 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:37:46.737010    4578 fix.go:54] fixHost starting: 
	I0828 10:37:46.737639    4578 fix.go:112] recreateIfNeeded on running-upgrade-717000: state=Running err=<nil>
	W0828 10:37:46.737646    4578 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:37:46.746271    4578 out.go:177] * Updating the running qemu2 "running-upgrade-717000" VM ...
	I0828 10:37:46.749157    4578 machine.go:93] provisionDockerMachine start ...
	I0828 10:37:46.749193    4578 main.go:141] libmachine: Using SSH client type: native
	I0828 10:37:46.749311    4578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030c85a0] 0x1030cae00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0828 10:37:46.749316    4578 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 10:37:46.823080    4578 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-717000
	
	I0828 10:37:46.823095    4578 buildroot.go:166] provisioning hostname "running-upgrade-717000"
	I0828 10:37:46.823135    4578 main.go:141] libmachine: Using SSH client type: native
	I0828 10:37:46.823250    4578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030c85a0] 0x1030cae00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0828 10:37:46.823255    4578 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-717000 && echo "running-upgrade-717000" | sudo tee /etc/hostname
	I0828 10:37:46.902594    4578 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-717000
	
	I0828 10:37:46.902648    4578 main.go:141] libmachine: Using SSH client type: native
	I0828 10:37:46.902762    4578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030c85a0] 0x1030cae00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0828 10:37:46.902772    4578 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-717000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-717000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-717000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 10:37:46.973360    4578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 10:37:46.973374    4578 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19529-1176/.minikube CaCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19529-1176/.minikube}
	I0828 10:37:46.973383    4578 buildroot.go:174] setting up certificates
	I0828 10:37:46.973387    4578 provision.go:84] configureAuth start
	I0828 10:37:46.973394    4578 provision.go:143] copyHostCerts
	I0828 10:37:46.973451    4578 exec_runner.go:144] found /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem, removing ...
	I0828 10:37:46.973456    4578 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem
	I0828 10:37:46.973572    4578 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem (1078 bytes)
	I0828 10:37:46.973753    4578 exec_runner.go:144] found /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem, removing ...
	I0828 10:37:46.973756    4578 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem
	I0828 10:37:46.973802    4578 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem (1123 bytes)
	I0828 10:37:46.973906    4578 exec_runner.go:144] found /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem, removing ...
	I0828 10:37:46.973909    4578 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem
	I0828 10:37:46.973947    4578 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem (1679 bytes)
	I0828 10:37:46.974040    4578 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-717000 san=[127.0.0.1 localhost minikube running-upgrade-717000]
	I0828 10:37:47.089006    4578 provision.go:177] copyRemoteCerts
	I0828 10:37:47.089046    4578 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 10:37:47.089053    4578 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/running-upgrade-717000/id_rsa Username:docker}
	I0828 10:37:47.128577    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 10:37:47.135551    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0828 10:37:47.142620    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 10:37:47.149125    4578 provision.go:87] duration metric: took 175.738583ms to configureAuth
	I0828 10:37:47.149134    4578 buildroot.go:189] setting minikube options for container-runtime
	I0828 10:37:47.149245    4578 config.go:182] Loaded profile config "running-upgrade-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:37:47.149277    4578 main.go:141] libmachine: Using SSH client type: native
	I0828 10:37:47.149368    4578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030c85a0] 0x1030cae00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0828 10:37:47.149373    4578 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0828 10:37:47.220533    4578 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0828 10:37:47.220545    4578 buildroot.go:70] root file system type: tmpfs
	I0828 10:37:47.220593    4578 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0828 10:37:47.220641    4578 main.go:141] libmachine: Using SSH client type: native
	I0828 10:37:47.220746    4578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030c85a0] 0x1030cae00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0828 10:37:47.220778    4578 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0828 10:37:47.295036    4578 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0828 10:37:47.295089    4578 main.go:141] libmachine: Using SSH client type: native
	I0828 10:37:47.295202    4578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030c85a0] 0x1030cae00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0828 10:37:47.295209    4578 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0828 10:37:47.369894    4578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 10:37:47.369906    4578 machine.go:96] duration metric: took 620.765ms to provisionDockerMachine
	I0828 10:37:47.369912    4578 start.go:293] postStartSetup for "running-upgrade-717000" (driver="qemu2")
	I0828 10:37:47.369920    4578 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 10:37:47.369974    4578 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 10:37:47.369983    4578 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/running-upgrade-717000/id_rsa Username:docker}
	I0828 10:37:47.408910    4578 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 10:37:47.410359    4578 info.go:137] Remote host: Buildroot 2021.02.12
	I0828 10:37:47.410367    4578 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19529-1176/.minikube/addons for local assets ...
	I0828 10:37:47.410454    4578 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19529-1176/.minikube/files for local assets ...
	I0828 10:37:47.410544    4578 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0828 10:37:47.410642    4578 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 10:37:47.413171    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0828 10:37:47.420022    4578 start.go:296] duration metric: took 50.106417ms for postStartSetup
	I0828 10:37:47.420044    4578 fix.go:56] duration metric: took 683.055083ms for fixHost
	I0828 10:37:47.420076    4578 main.go:141] libmachine: Using SSH client type: native
	I0828 10:37:47.420175    4578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030c85a0] 0x1030cae00 <nil>  [] 0s} localhost 50261 <nil> <nil>}
	I0828 10:37:47.420179    4578 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 10:37:47.490935    4578 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724866667.880454930
	
	I0828 10:37:47.490946    4578 fix.go:216] guest clock: 1724866667.880454930
	I0828 10:37:47.490950    4578 fix.go:229] Guest: 2024-08-28 10:37:47.88045493 -0700 PDT Remote: 2024-08-28 10:37:47.420048 -0700 PDT m=+0.788871668 (delta=460.40693ms)
	I0828 10:37:47.490967    4578 fix.go:200] guest clock delta is within tolerance: 460.40693ms
	I0828 10:37:47.490973    4578 start.go:83] releasing machines lock for "running-upgrade-717000", held for 753.998083ms
	I0828 10:37:47.491038    4578 ssh_runner.go:195] Run: cat /version.json
	I0828 10:37:47.491049    4578 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/running-upgrade-717000/id_rsa Username:docker}
	I0828 10:37:47.491038    4578 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 10:37:47.491089    4578 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/running-upgrade-717000/id_rsa Username:docker}
	W0828 10:37:47.491632    4578 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50261: connect: connection refused
	I0828 10:37:47.491667    4578 retry.go:31] will retry after 289.942564ms: dial tcp [::1]:50261: connect: connection refused
	W0828 10:37:47.526974    4578 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0828 10:37:47.527022    4578 ssh_runner.go:195] Run: systemctl --version
	I0828 10:37:47.528872    4578 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 10:37:47.530653    4578 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 10:37:47.530676    4578 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0828 10:37:47.533425    4578 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0828 10:37:47.537919    4578 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 10:37:47.537926    4578 start.go:495] detecting cgroup driver to use...
	I0828 10:37:47.537993    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 10:37:47.543013    4578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0828 10:37:47.545771    4578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0828 10:37:47.548845    4578 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0828 10:37:47.548865    4578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0828 10:37:47.551928    4578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 10:37:47.554827    4578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0828 10:37:47.557608    4578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 10:37:47.560495    4578 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 10:37:47.563725    4578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0828 10:37:47.566548    4578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0828 10:37:47.569271    4578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0828 10:37:47.572818    4578 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 10:37:47.575878    4578 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 10:37:47.578560    4578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:37:47.673285    4578 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0828 10:37:47.682666    4578 start.go:495] detecting cgroup driver to use...
	I0828 10:37:47.682728    4578 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0828 10:37:47.689458    4578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 10:37:47.694949    4578 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 10:37:47.707068    4578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 10:37:47.711369    4578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 10:37:47.715683    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 10:37:47.721021    4578 ssh_runner.go:195] Run: which cri-dockerd
	I0828 10:37:47.722597    4578 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0828 10:37:47.725054    4578 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0828 10:37:47.730067    4578 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0828 10:37:47.826171    4578 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0828 10:37:47.920889    4578 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0828 10:37:47.920942    4578 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0828 10:37:47.926737    4578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:37:48.016881    4578 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0828 10:37:49.530391    4578 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.513546917s)
	I0828 10:37:49.530449    4578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0828 10:37:49.534991    4578 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0828 10:37:49.541273    4578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 10:37:49.545770    4578 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0828 10:37:49.624946    4578 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0828 10:37:49.702046    4578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:37:49.768941    4578 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0828 10:37:49.775481    4578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 10:37:49.780148    4578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:37:49.856821    4578 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0828 10:37:49.904198    4578 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0828 10:37:49.904278    4578 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0828 10:37:49.906618    4578 start.go:563] Will wait 60s for crictl version
	I0828 10:37:49.906666    4578 ssh_runner.go:195] Run: which crictl
	I0828 10:37:49.908185    4578 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 10:37:49.920322    4578 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0828 10:37:49.920383    4578 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 10:37:49.933043    4578 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 10:37:49.953914    4578 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0828 10:37:49.954035    4578 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0828 10:37:49.955363    4578 kubeadm.go:883] updating cluster {Name:running-upgrade-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0828 10:37:49.955414    4578 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0828 10:37:49.955452    4578 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 10:37:49.966010    4578 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0828 10:37:49.966019    4578 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0828 10:37:49.966064    4578 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0828 10:37:49.969226    4578 ssh_runner.go:195] Run: which lz4
	I0828 10:37:49.970465    4578 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 10:37:49.971580    4578 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 10:37:49.971592    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0828 10:37:50.922180    4578 docker.go:649] duration metric: took 951.777083ms to copy over tarball
	I0828 10:37:50.922230    4578 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 10:37:52.113492    4578 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.191289667s)
	I0828 10:37:52.113505    4578 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 10:37:52.129276    4578 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0828 10:37:52.132211    4578 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0828 10:37:52.137259    4578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:37:52.222413    4578 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0828 10:37:52.560790    4578 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 10:37:52.581390    4578 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0828 10:37:52.581398    4578 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0828 10:37:52.581404    4578 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 10:37:52.585124    4578 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:37:52.587124    4578 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0828 10:37:52.589005    4578 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:37:52.589559    4578 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:37:52.591540    4578 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0828 10:37:52.592074    4578 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:37:52.593519    4578 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:37:52.593792    4578 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:37:52.595005    4578 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:37:52.595049    4578 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:37:52.596192    4578 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:37:52.596664    4578 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0828 10:37:52.597541    4578 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0828 10:37:52.597625    4578 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:37:52.598279    4578 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0828 10:37:52.598985    4578 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0828 10:37:53.689553    4578 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:37:53.691589    4578 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:37:53.713115    4578 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0828 10:37:53.734049    4578 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0828 10:37:53.734294    4578 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:37:53.737157    4578 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0828 10:37:53.737197    4578 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0828 10:37:53.737219    4578 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:37:53.737197    4578 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:37:53.737275    4578 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:37:53.737302    4578 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:37:53.763840    4578 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0828 10:37:53.763866    4578 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0828 10:37:53.763930    4578 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0828 10:37:53.764250    4578 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0828 10:37:53.764262    4578 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:37:53.764286    4578 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:37:53.771025    4578 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0828 10:37:53.776998    4578 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0828 10:37:53.779735    4578 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0828 10:37:53.779827    4578 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:37:53.791563    4578 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0828 10:37:53.795096    4578 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0828 10:37:53.795208    4578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0828 10:37:53.801432    4578 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0828 10:37:53.801453    4578 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:37:53.801475    4578 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0828 10:37:53.801492    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0828 10:37:53.801505    4578 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:37:53.808259    4578 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0828 10:37:53.820216    4578 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:37:53.836697    4578 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0828 10:37:53.873149    4578 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0828 10:37:53.873164    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0828 10:37:55.326708    4578 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.525233667s)
	I0828 10:37:55.326741    4578 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0828 10:37:55.326823    4578 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0: (1.518596666s)
	I0828 10:37:55.326881    4578 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0828 10:37:55.326923    4578 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0828 10:37:55.326932    4578 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1: (1.5067455s)
	I0828 10:37:55.326971    4578 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0828 10:37:55.327010    4578 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7: (1.490339875s)
	I0828 10:37:55.327049    4578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0828 10:37:55.327126    4578 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0828 10:37:55.327790    4578 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0828 10:37:55.327932    4578 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0828 10:37:55.327939    4578 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0828 10:37:55.327961    4578 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:37:55.328009    4578 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load": (1.454885375s)
	I0828 10:37:55.328040    4578 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0828 10:37:55.328052    4578 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:37:55.341531    4578 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0828 10:37:55.341587    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0828 10:37:55.390561    4578 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0828 10:37:55.391051    4578 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0828 10:37:55.391087    4578 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0828 10:37:55.391191    4578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0828 10:37:55.394833    4578 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0828 10:37:55.394865    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0828 10:37:55.404352    4578 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0828 10:37:55.404366    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0828 10:37:55.438407    4578 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0828 10:37:55.438430    4578 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0828 10:37:55.438436    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0828 10:37:55.671813    4578 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0828 10:37:55.671855    4578 cache_images.go:92] duration metric: took 3.090556458s to LoadCachedImages
	W0828 10:37:55.671891    4578 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0828 10:37:55.671896    4578 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0828 10:37:55.671962    4578 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-717000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 10:37:55.672033    4578 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0828 10:37:55.685164    4578 cni.go:84] Creating CNI manager for ""
	I0828 10:37:55.685175    4578 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:37:55.685181    4578 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 10:37:55.685189    4578 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-717000 NodeName:running-upgrade-717000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 10:37:55.685248    4578 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-717000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 10:37:55.685299    4578 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0828 10:37:55.688812    4578 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 10:37:55.688842    4578 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 10:37:55.691637    4578 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0828 10:37:55.696866    4578 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 10:37:55.702106    4578 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0828 10:37:55.707473    4578 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0828 10:37:55.708961    4578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:37:55.793072    4578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 10:37:55.798259    4578 certs.go:68] Setting up /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000 for IP: 10.0.2.15
	I0828 10:37:55.798265    4578 certs.go:194] generating shared ca certs ...
	I0828 10:37:55.798273    4578 certs.go:226] acquiring lock for ca certs: {Name:mkf861e7f19b199967d33246b8c25f60e0670f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:37:55.798414    4578 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.key
	I0828 10:37:55.798453    4578 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.key
	I0828 10:37:55.798457    4578 certs.go:256] generating profile certs ...
	I0828 10:37:55.798514    4578 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/client.key
	I0828 10:37:55.798529    4578 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/apiserver.key.df101fe9
	I0828 10:37:55.798537    4578 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/apiserver.crt.df101fe9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0828 10:37:55.841282    4578 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/apiserver.crt.df101fe9 ...
	I0828 10:37:55.841287    4578 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/apiserver.crt.df101fe9: {Name:mk100d51fa3f0f5f9a5055933a6440e2f4c24d48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:37:55.841550    4578 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/apiserver.key.df101fe9 ...
	I0828 10:37:55.841554    4578 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/apiserver.key.df101fe9: {Name:mkdb92e1418058beac7245042e12a4e66f1bd032 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:37:55.841690    4578 certs.go:381] copying /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/apiserver.crt.df101fe9 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/apiserver.crt
	I0828 10:37:55.841855    4578 certs.go:385] copying /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/apiserver.key.df101fe9 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/apiserver.key
	I0828 10:37:55.841993    4578 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/proxy-client.key
	I0828 10:37:55.842113    4578 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/1678.pem (1338 bytes)
	W0828 10:37:55.842136    4578 certs.go:480] ignoring /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0828 10:37:55.842140    4578 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 10:37:55.842159    4578 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem (1078 bytes)
	I0828 10:37:55.842177    4578 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem (1123 bytes)
	I0828 10:37:55.842196    4578 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem (1679 bytes)
	I0828 10:37:55.842237    4578 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0828 10:37:55.842532    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 10:37:55.849473    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 10:37:55.856726    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 10:37:55.864318    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 10:37:55.871817    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0828 10:37:55.878898    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 10:37:55.885523    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 10:37:55.892625    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 10:37:55.900104    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0828 10:37:55.907014    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0828 10:37:55.913824    4578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 10:37:55.921321    4578 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 10:37:55.926459    4578 ssh_runner.go:195] Run: openssl version
	I0828 10:37:55.928359    4578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0828 10:37:55.931714    4578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0828 10:37:55.933207    4578 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:06 /usr/share/ca-certificates/1678.pem
	I0828 10:37:55.933231    4578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0828 10:37:55.935252    4578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0828 10:37:55.938027    4578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0828 10:37:55.941258    4578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0828 10:37:55.942704    4578 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:06 /usr/share/ca-certificates/16782.pem
	I0828 10:37:55.942726    4578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0828 10:37:55.944536    4578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 10:37:55.947882    4578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 10:37:55.951389    4578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 10:37:55.952928    4578 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:51 /usr/share/ca-certificates/minikubeCA.pem
	I0828 10:37:55.952946    4578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 10:37:55.954846    4578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 10:37:55.957438    4578 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 10:37:55.958948    4578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 10:37:55.960659    4578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 10:37:55.962333    4578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 10:37:55.963966    4578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 10:37:55.965981    4578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 10:37:55.967753    4578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
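Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours). A rough standard-library Go equivalent, for illustration; minikube shells out to openssl as shown rather than doing this in-process:

	// checkend.go: approximate `openssl x509 -noout -checkend 86400` in Go.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/etcd/peer.crt
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Expiring within the next 24h is the failure case openssl reports.
		if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
			fmt.Println("Certificate will expire")
			os.Exit(1) // same signal openssl gives the caller
		}
		fmt.Println("Certificate will not expire")
	}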
	I0828 10:37:55.969489    4578 kubeadm.go:392] StartCluster: {Name:running-upgrade-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50293 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0828 10:37:55.969548    4578 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0828 10:37:55.980031    4578 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 10:37:55.983538    4578 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 10:37:55.983542    4578 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 10:37:55.983561    4578 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 10:37:55.986155    4578 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 10:37:55.986424    4578 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-717000" does not appear in /Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:37:55.986470    4578 kubeconfig.go:62] /Users/jenkins/minikube-integration/19529-1176/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-717000" cluster setting kubeconfig missing "running-upgrade-717000" context setting]
	I0828 10:37:55.986602    4578 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/kubeconfig: {Name:mke8b729c65a2ae9e4d9042dc78e2127479f8609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:37:55.987723    4578 kapi.go:59] client config for running-upgrade-717000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/client.key", CAFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104683eb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
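The rest.Config dump above is the client minikube builds after repairing the kubeconfig entry. For reference, a short client-go sketch that loads an equivalent config from that kubeconfig file (path taken from the log; the snippet is illustrative, not minikube's code):

	// load_kubeconfig.go: illustrative client-go usage, not minikube code.
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/Users/jenkins/minikube-integration/19529-1176/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		// Host, CertFile, KeyFile and CAFile correspond to the fields
		// visible in the rest.Config dump above.
		fmt.Println("host:", cfg.Host)
	}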
	I0828 10:37:55.988036    4578 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 10:37:55.991099    4578 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-717000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
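The drift check above rests on diff's exit status: 0 means the rendered kubeadm.yaml matches what is already on the node, 1 means the files differ and the cluster must be reconfigured. A minimal sketch of that pattern (paths from the log; the error handling is an assumption about the shape of the check, not a copy of kubeadm.go):

	// drift.go: detect config drift via diff's exit status.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new").Output()
		if err == nil {
			fmt.Println("no drift; keeping the existing kubeadm.yaml")
			return
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			// Exit status 1 = files differ; stdout holds the unified diff.
			fmt.Printf("config drift detected, reconfiguring:\n%s", out)
			return
		}
		log.Fatal(err) // exit status 2 = diff itself failed (missing file, etc.)
	}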
	I0828 10:37:55.991105    4578 kubeadm.go:1160] stopping kube-system containers ...
	I0828 10:37:55.991144    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0828 10:37:56.001706    4578 docker.go:483] Stopping containers: [47cc7d261c73 a24ec86f8a1c 344d6faf3784 67365df3cec1 ea763b575572 e931fd3528ca 52b00da325a7 3374060fee0f 51f026cb47e5 5c2f532cabab]
	I0828 10:37:56.001769    4578 ssh_runner.go:195] Run: docker stop 47cc7d261c73 a24ec86f8a1c 344d6faf3784 67365df3cec1 ea763b575572 e931fd3528ca 52b00da325a7 3374060fee0f 51f026cb47e5 5c2f532cabab
	I0828 10:37:56.013025    4578 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 10:37:56.123692    4578 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 10:37:56.128638    4578 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug 28 17:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug 28 17:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 28 17:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Aug 28 17:37 /etc/kubernetes/scheduler.conf
	
	I0828 10:37:56.128673    4578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/admin.conf
	I0828 10:37:56.133431    4578 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0828 10:37:56.133464    4578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 10:37:56.137422    4578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/kubelet.conf
	I0828 10:37:56.140844    4578 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0828 10:37:56.140875    4578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 10:37:56.144511    4578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/controller-manager.conf
	I0828 10:37:56.147725    4578 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0828 10:37:56.147746    4578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 10:37:56.150674    4578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/scheduler.conf
	I0828 10:37:56.154178    4578 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0828 10:37:56.154205    4578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 10:37:56.156761    4578 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 10:37:56.159977    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:37:56.181699    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:37:56.605271    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:37:56.834338    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:37:56.883757    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:37:56.907098    4578 api_server.go:52] waiting for apiserver process to appear ...
	I0828 10:37:56.907178    4578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:37:57.409214    4578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:37:57.909232    4578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:37:58.409285    4578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:37:58.417554    4578 api_server.go:72] duration metric: took 1.510517s to wait for apiserver process to appear ...
	I0828 10:37:58.417569    4578 api_server.go:88] waiting for apiserver healthz status ...
	I0828 10:37:58.417584    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:38:03.419635    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:38:03.419732    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:38:08.420326    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:38:08.420433    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:38:13.421355    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:38:13.421416    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:38:18.422492    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:38:18.422555    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:38:23.423902    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:38:23.423942    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:38:28.425464    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:38:28.425551    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:38:33.426883    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:38:33.426975    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:38:38.429400    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:38:38.429480    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:38:43.431937    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:38:43.432012    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:38:48.433845    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:38:48.433943    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:38:53.436441    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:38:53.436499    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:38:58.438836    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
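Every "Checking apiserver healthz" / "stopped" pair above is a single probe of https://10.0.2.15:8443/healthz that times out client-side after roughly five seconds; the apiserver never comes up, so the loop keeps failing. A sketch of that polling shape (the timeout value and the skipped TLS verification are assumptions for a self-contained example; the real check trusts the cluster CA):

	// healthz_poll.go: illustrative sketch of the healthz polling loop above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumption: matches the ~5s gaps in the log
			Transport: &http.Transport{
				// Illustration only; the real check verifies against the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for attempt := 1; attempt <= 12; attempt++ {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Printf("attempt %d: %v\n", attempt, err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			fmt.Printf("attempt %d: healthz returned %s\n", attempt, resp.Status)
		}
		fmt.Println("gave up waiting for apiserver")
	}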
	I0828 10:38:58.439293    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:38:58.484549    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:38:58.484686    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:38:58.504786    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:38:58.504886    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:38:58.518739    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:38:58.518804    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:38:58.530577    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:38:58.530639    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:38:58.541382    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:38:58.541441    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:38:58.552275    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:38:58.552336    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:38:58.562590    4578 logs.go:276] 0 containers: []
	W0828 10:38:58.562603    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:38:58.562669    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:38:58.583039    4578 logs.go:276] 0 containers: []
	W0828 10:38:58.583050    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:38:58.583057    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:38:58.583062    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:38:58.599233    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:38:58.599247    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:38:58.616965    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:38:58.616975    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:38:58.630103    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:38:58.630119    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:38:58.668790    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:38:58.668800    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:38:58.686711    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:38:58.686722    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:38:58.697794    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:38:58.697804    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:38:58.702261    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:38:58.702269    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:38:58.715907    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:38:58.715920    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:38:58.728411    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:38:58.728421    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:38:58.754633    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:38:58.754640    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:38:58.777521    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:38:58.777534    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:38:58.793896    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:38:58.793906    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:38:58.805416    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:38:58.805425    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:38:58.874664    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:38:58.874674    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:39:01.391418    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:39:06.394098    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:39:06.394555    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:39:06.437253    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:39:06.437413    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:39:06.459099    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:39:06.459216    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:39:06.476607    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:39:06.476685    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:39:06.490966    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:39:06.491032    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:39:06.501436    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:39:06.501499    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:39:06.512153    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:39:06.512216    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:39:06.522274    4578 logs.go:276] 0 containers: []
	W0828 10:39:06.522283    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:39:06.522346    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:39:06.533027    4578 logs.go:276] 0 containers: []
	W0828 10:39:06.533037    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:39:06.533046    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:39:06.533052    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:39:06.569871    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:39:06.569878    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:39:06.605936    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:39:06.605947    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:39:06.628491    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:39:06.628504    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:39:06.642597    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:39:06.642607    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:39:06.646863    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:39:06.646871    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:39:06.661561    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:39:06.661571    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:39:06.687597    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:39:06.687607    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:39:06.701829    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:39:06.701842    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:39:06.717012    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:39:06.717023    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:39:06.731849    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:39:06.731859    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:39:06.748113    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:39:06.748125    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:39:06.759692    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:39:06.759706    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:39:06.777260    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:39:06.777271    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:39:06.789721    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:39:06.789730    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
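Each "Gathering logs for X" round above is the same two-step pattern: list container IDs with a docker ps name filter, then tail each container's logs. A compact sketch of one such step (the docker commands are copied from the log; the Go wrapper itself is illustrative):

	// gather_logs.go: sketch of one log-gathering step from the rounds above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1: find matching container IDs, as in the `docker ps` lines above.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_kube-apiserver",
			"--format", "{{.ID}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		// Step 2: tail each container's logs, as in the `docker logs` lines above.
		for _, id := range strings.Fields(string(out)) {
			logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				log.Printf("docker logs %s: %v", id, err)
				continue
			}
			fmt.Printf("=== %s ===\n%s", id, logs)
		}
	}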
	I0828 10:39:09.303533    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:39:14.305860    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:39:14.306256    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:39:14.350578    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:39:14.350743    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:39:14.373837    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:39:14.373926    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:39:14.388157    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:39:14.388222    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:39:14.400454    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:39:14.400519    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:39:14.410723    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:39:14.410794    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:39:14.421336    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:39:14.421396    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:39:14.431377    4578 logs.go:276] 0 containers: []
	W0828 10:39:14.431391    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:39:14.431438    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:39:14.441422    4578 logs.go:276] 0 containers: []
	W0828 10:39:14.441435    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:39:14.441445    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:39:14.441451    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:39:14.452674    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:39:14.452685    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:39:14.468963    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:39:14.468971    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:39:14.486700    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:39:14.486711    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:39:14.511008    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:39:14.511016    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:39:14.522411    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:39:14.522420    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:39:14.557857    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:39:14.557871    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:39:14.574776    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:39:14.574789    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:39:14.591668    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:39:14.591688    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:39:14.603000    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:39:14.603010    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:39:14.615751    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:39:14.615762    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:39:14.629821    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:39:14.629833    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:39:14.645246    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:39:14.645256    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:39:14.683238    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:39:14.683249    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:39:14.687413    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:39:14.687420    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:39:17.211892    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:39:22.214517    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:39:22.214737    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:39:22.232452    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:39:22.232545    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:39:22.246898    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:39:22.246973    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:39:22.258183    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:39:22.258245    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:39:22.268516    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:39:22.268579    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:39:22.278812    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:39:22.278871    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:39:22.289326    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:39:22.289400    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:39:22.299250    4578 logs.go:276] 0 containers: []
	W0828 10:39:22.299261    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:39:22.299310    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:39:22.309364    4578 logs.go:276] 0 containers: []
	W0828 10:39:22.309373    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:39:22.309381    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:39:22.309386    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:39:22.346828    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:39:22.346838    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:39:22.358038    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:39:22.358051    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:39:22.377182    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:39:22.377191    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:39:22.389547    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:39:22.389561    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:39:22.393824    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:39:22.393830    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:39:22.407405    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:39:22.407416    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:39:22.424417    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:39:22.424430    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:39:22.450501    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:39:22.450508    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:39:22.461644    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:39:22.461656    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:39:22.507991    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:39:22.508005    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:39:22.525289    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:39:22.525299    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:39:22.540278    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:39:22.540291    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:39:22.557554    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:39:22.557567    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:39:22.572624    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:39:22.572637    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:39:25.086244    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:39:30.088908    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:39:30.089303    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:39:30.124939    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:39:30.125086    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:39:30.145655    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:39:30.145767    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:39:30.160897    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:39:30.160970    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:39:30.174010    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:39:30.174083    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:39:30.184736    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:39:30.184811    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:39:30.195934    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:39:30.196004    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:39:30.206532    4578 logs.go:276] 0 containers: []
	W0828 10:39:30.206542    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:39:30.206599    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:39:30.216716    4578 logs.go:276] 0 containers: []
	W0828 10:39:30.216730    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:39:30.216737    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:39:30.216742    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:39:30.229112    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:39:30.229122    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:39:30.243943    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:39:30.243954    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:39:30.261417    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:39:30.261429    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:39:30.296963    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:39:30.296973    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:39:30.310740    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:39:30.310751    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:39:30.321764    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:39:30.321775    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:39:30.337660    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:39:30.337670    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:39:30.371979    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:39:30.371994    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:39:30.384196    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:39:30.384211    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:39:30.408022    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:39:30.408029    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:39:30.421149    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:39:30.421162    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:39:30.425430    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:39:30.425436    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:39:30.444917    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:39:30.444926    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:39:30.458641    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:39:30.458650    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:39:32.974782    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:39:37.977356    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:39:37.977817    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:39:38.018591    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:39:38.018724    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:39:38.041402    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:39:38.041511    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:39:38.056772    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:39:38.056846    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:39:38.069333    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:39:38.069403    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:39:38.080670    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:39:38.080728    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:39:38.091472    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:39:38.091528    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:39:38.101704    4578 logs.go:276] 0 containers: []
	W0828 10:39:38.101715    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:39:38.101768    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:39:38.112518    4578 logs.go:276] 0 containers: []
	W0828 10:39:38.112529    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:39:38.112537    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:39:38.112542    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:39:38.127002    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:39:38.127013    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:39:38.140918    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:39:38.140929    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:39:38.153802    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:39:38.153813    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:39:38.166818    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:39:38.166830    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:39:38.171489    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:39:38.171497    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:39:38.182641    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:39:38.182651    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:39:38.194436    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:39:38.194450    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:39:38.236302    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:39:38.236316    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:39:38.257067    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:39:38.257077    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:39:38.271489    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:39:38.271501    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:39:38.287897    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:39:38.287908    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:39:38.313844    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:39:38.313851    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:39:38.349196    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:39:38.349206    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:39:38.364434    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:39:38.364447    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:39:40.883904    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:39:45.886422    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:39:45.886748    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:39:45.922837    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:39:45.922962    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:39:45.946579    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:39:45.946682    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:39:45.963434    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:39:45.963508    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:39:45.974947    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:39:45.975010    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:39:45.985401    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:39:45.985462    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:39:45.996178    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:39:45.996261    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:39:46.006823    4578 logs.go:276] 0 containers: []
	W0828 10:39:46.006833    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:39:46.006886    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:39:46.016993    4578 logs.go:276] 0 containers: []
	W0828 10:39:46.017008    4578 logs.go:278] No container was found matching "storage-provisioner"
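
Each collection round begins by enumerating control-plane containers, one docker ps -a --filter name=k8s_<component> --format {{.ID}} call per component; an empty result is logged as a warning, as with "kindnet" and "storage-provisioner" above. A local-exec sketch of that discovery step (illustrative; the real tool runs the same command over SSH inside the VM):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or not) whose name matches
// the k8s_<component> prefix, mirroring the docker ps invocations above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("W No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("I %d containers: %v\n", len(ids), ids)
	}
}
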
	I0828 10:39:46.017016    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:39:46.017022    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:39:46.030852    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:39:46.030866    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:39:46.045316    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:39:46.045326    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:39:46.070498    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:39:46.070507    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:39:46.088005    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:39:46.088019    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:39:46.108213    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:39:46.108226    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:39:46.123962    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:39:46.123974    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:39:46.136643    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:39:46.136656    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:39:46.171266    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:39:46.171281    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:39:46.182950    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:39:46.182962    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:39:46.194975    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:39:46.194987    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:39:46.212512    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:39:46.212522    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:39:46.223935    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:39:46.223945    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:39:46.261334    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:39:46.261343    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:39:46.265408    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:39:46.265416    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
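
For each container ID discovered, the collector then captures the component's recent output with docker logs --tail 400 <id> under bash, bounding every component to its last 400 lines. A stand-in sketch using local exec in place of the SSH runner (container IDs copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

// gatherContainerLogs fetches the last 400 lines of a container's output,
// as in the "Gathering logs for ..." pairs above.
func gatherContainerLogs(id string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"docker logs --tail 400 "+id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, id := range []string{"ea763b575572", "e931fd3528ca"} {
		logs, err := gatherContainerLogs(id)
		if err != nil {
			fmt.Println("gather failed for", id, "-", err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", id, logs)
	}
}
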
	I0828 10:39:48.783198    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:39:53.785803    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:39:53.786202    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:39:53.824918    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:39:53.825035    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:39:53.846682    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:39:53.846792    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:39:53.862156    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:39:53.862227    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:39:53.874253    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:39:53.874321    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:39:53.885400    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:39:53.885460    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:39:53.896195    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:39:53.896249    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:39:53.906566    4578 logs.go:276] 0 containers: []
	W0828 10:39:53.906575    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:39:53.906620    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:39:53.916660    4578 logs.go:276] 0 containers: []
	W0828 10:39:53.916673    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:39:53.916681    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:39:53.916686    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:39:53.930544    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:39:53.930557    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:39:53.944737    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:39:53.944750    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:39:53.956848    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:39:53.956860    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:39:53.977364    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:39:53.977374    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:39:54.002957    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:39:54.002967    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:39:54.022861    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:39:54.022872    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:39:54.035476    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:39:54.035486    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:39:54.071090    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:39:54.071097    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:39:54.075041    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:39:54.075047    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:39:54.089199    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:39:54.089209    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:39:54.100510    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:39:54.100522    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
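
Cluster-level state comes from kubectl describe nodes, run through the version-pinned binary that minikube installs inside the VM and the in-VM kubeconfig. A sketch of that invocation (paths copied verbatim from the log; the wrapper function is hypothetical and only works where those paths exist):

package main

import (
	"fmt"
	"os/exec"
)

// describeNodes invokes the pinned in-VM kubectl against the in-VM
// kubeconfig, mirroring the "describe nodes" lines above.
func describeNodes() (string, error) {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := describeNodes()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(out)
}
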
	I0828 10:39:54.136415    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:39:54.136431    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:39:54.156501    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:39:54.156514    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:39:54.173430    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:39:54.173446    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:39:56.690762    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:40:01.693327    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:40:01.693748    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:40:01.732013    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:40:01.732145    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:40:01.760094    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:40:01.760183    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:40:01.774026    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:40:01.774097    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:40:01.786139    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:40:01.786210    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:40:01.796824    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:40:01.796885    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:40:01.807524    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:40:01.807587    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:40:01.817591    4578 logs.go:276] 0 containers: []
	W0828 10:40:01.817604    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:40:01.817656    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:40:01.827489    4578 logs.go:276] 0 containers: []
	W0828 10:40:01.827498    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:40:01.827506    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:40:01.827512    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:40:01.862834    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:40:01.862850    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:40:01.882669    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:40:01.882680    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:40:01.894199    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:40:01.894211    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:40:01.911451    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:40:01.911461    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:40:01.924567    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:40:01.924579    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:40:01.935879    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:40:01.935894    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:40:01.940543    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:40:01.940549    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:40:01.954548    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:40:01.954557    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:40:01.965318    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:40:01.965326    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:40:01.983585    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:40:01.983596    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:40:01.997845    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:40:01.997859    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:40:02.012709    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:40:02.012721    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:40:02.027571    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:40:02.027581    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:40:02.051790    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:40:02.051800    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:40:04.588766    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:40:09.589396    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:40:09.589845    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:40:09.635911    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:40:09.636037    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:40:09.660872    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:40:09.660958    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:40:09.674517    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:40:09.674585    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:40:09.685782    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:40:09.685856    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:40:09.696290    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:40:09.696357    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:40:09.710858    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:40:09.710922    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:40:09.720808    4578 logs.go:276] 0 containers: []
	W0828 10:40:09.720819    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:40:09.720867    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:40:09.731279    4578 logs.go:276] 0 containers: []
	W0828 10:40:09.731288    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:40:09.731296    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:40:09.731300    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:40:09.766544    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:40:09.766558    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:40:09.781140    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:40:09.781152    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:40:09.792743    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:40:09.792754    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:40:09.811178    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:40:09.811191    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:40:09.815804    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:40:09.815814    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:40:09.833947    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:40:09.833956    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
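
The "container status" step above uses a shell fallback chain: the backquoted which crictl || echo crictl keeps the command well-formed even when crictl is not installed, so the trailing || sudo docker ps -a can take over. A sketch that shells out the same way (illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when present and falls back to docker.
// The command string is taken verbatim from the log above.
func containerStatus() (string, error) {
	const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(out)
}
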
	I0828 10:40:09.845478    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:40:09.845488    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:40:09.859604    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:40:09.859615    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:40:09.878827    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:40:09.878840    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:40:09.904554    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:40:09.904564    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:40:09.941594    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:40:09.941602    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:40:09.958078    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:40:09.958092    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:40:09.978397    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:40:09.978408    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:40:09.994497    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:40:09.994510    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:40:12.516348    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:40:17.518458    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:40:17.518592    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:40:17.530483    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:40:17.530555    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:40:17.541311    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:40:17.541379    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:40:17.552089    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:40:17.552155    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:40:17.563184    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:40:17.563260    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:40:17.574055    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:40:17.574121    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:40:17.584453    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:40:17.584512    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:40:17.595162    4578 logs.go:276] 0 containers: []
	W0828 10:40:17.595173    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:40:17.595220    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:40:17.605824    4578 logs.go:276] 0 containers: []
	W0828 10:40:17.605836    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:40:17.605845    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:40:17.605850    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:40:17.611547    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:40:17.611555    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:40:17.635800    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:40:17.635814    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:40:17.660033    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:40:17.660041    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:40:17.696500    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:40:17.696507    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:40:17.712687    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:40:17.712703    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:40:17.736077    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:40:17.736087    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:40:17.753572    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:40:17.753582    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:40:17.766585    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:40:17.766595    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:40:17.804640    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:40:17.804657    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:40:17.819064    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:40:17.819073    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:40:17.836322    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:40:17.836332    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:40:17.848651    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:40:17.848662    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:40:17.860685    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:40:17.860697    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:40:17.883137    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:40:17.883147    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:40:20.396860    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:40:25.399086    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:40:25.399427    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:40:25.438235    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:40:25.438373    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:40:25.460153    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:40:25.460261    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:40:25.478380    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:40:25.478458    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:40:25.490231    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:40:25.490303    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:40:25.500944    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:40:25.501011    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:40:25.512644    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:40:25.512711    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:40:25.524045    4578 logs.go:276] 0 containers: []
	W0828 10:40:25.524058    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:40:25.524130    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:40:25.535111    4578 logs.go:276] 0 containers: []
	W0828 10:40:25.535121    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:40:25.535128    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:40:25.535133    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:40:25.549019    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:40:25.549032    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:40:25.563360    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:40:25.563373    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:40:25.574752    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:40:25.574763    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
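
In the dmesg invocation above, the flags bound what gets captured: -P disables the pager, -H selects human-readable formatting, -L=never suppresses color escapes, and --level warn,err,crit,alert,emerg keeps only warning-or-worse kernel messages, with tail -n 400 capping the output (flag meanings per util-linux dmesg, noted here as an aid to reading the command).
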
	I0828 10:40:25.578995    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:40:25.579004    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:40:25.617914    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:40:25.617928    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:40:25.634042    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:40:25.634055    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:40:25.645931    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:40:25.645944    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:40:25.659876    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:40:25.659886    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:40:25.670935    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:40:25.670947    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:40:25.688827    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:40:25.688836    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:40:25.704487    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:40:25.704499    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:40:25.716473    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:40:25.716489    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:40:25.741191    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:40:25.741201    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:40:25.778405    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:40:25.778416    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:40:28.312024    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:40:33.314107    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:40:33.314240    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:40:33.325474    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:40:33.325548    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:40:33.336156    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:40:33.336230    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:40:33.347823    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:40:33.347903    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:40:33.365097    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:40:33.365183    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:40:33.375456    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:40:33.375529    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:40:33.386345    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:40:33.386410    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:40:33.396723    4578 logs.go:276] 0 containers: []
	W0828 10:40:33.396734    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:40:33.396801    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:40:33.406960    4578 logs.go:276] 0 containers: []
	W0828 10:40:33.406974    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:40:33.406984    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:40:33.406989    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:40:33.446448    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:40:33.446464    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:40:33.464647    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:40:33.464656    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:40:33.479713    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:40:33.479724    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:40:33.493581    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:40:33.493591    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:40:33.507969    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:40:33.507980    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:40:33.519774    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:40:33.519785    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:40:33.535711    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:40:33.535721    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:40:33.551138    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:40:33.551148    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:40:33.555949    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:40:33.555955    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:40:33.591733    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:40:33.591744    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:40:33.611892    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:40:33.611904    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:40:33.636566    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:40:33.636574    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:40:33.648237    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:40:33.648251    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:40:33.661261    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:40:33.661271    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:40:36.174758    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:40:41.176972    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:40:41.177154    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:40:41.193438    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:40:41.193517    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:40:41.205944    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:40:41.206016    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:40:41.216982    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:40:41.217049    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:40:41.227317    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:40:41.227394    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:40:41.237972    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:40:41.238045    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:40:41.250465    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:40:41.250531    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:40:41.261546    4578 logs.go:276] 0 containers: []
	W0828 10:40:41.261558    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:40:41.261617    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:40:41.274975    4578 logs.go:276] 0 containers: []
	W0828 10:40:41.274991    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:40:41.274999    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:40:41.275004    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:40:41.290783    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:40:41.290793    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:40:41.326470    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:40:41.326480    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:40:41.362058    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:40:41.362072    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:40:41.376665    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:40:41.376675    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:40:41.388110    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:40:41.388119    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:40:41.406590    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:40:41.406599    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:40:41.420447    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:40:41.420457    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:40:41.432233    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:40:41.432246    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:40:41.447505    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:40:41.447519    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:40:41.459663    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:40:41.459673    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:40:41.474918    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:40:41.474931    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:40:41.498624    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:40:41.498634    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:40:41.503398    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:40:41.503407    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:40:41.522696    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:40:41.522707    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:40:44.049029    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:40:49.051095    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:40:49.051251    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:40:49.069311    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:40:49.069385    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:40:49.083204    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:40:49.083267    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:40:49.100651    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:40:49.100721    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:40:49.111057    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:40:49.111141    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:40:49.121997    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:40:49.122061    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:40:49.138173    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:40:49.138230    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:40:49.149105    4578 logs.go:276] 0 containers: []
	W0828 10:40:49.149115    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:40:49.149170    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:40:49.159872    4578 logs.go:276] 0 containers: []
	W0828 10:40:49.159883    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:40:49.159892    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:40:49.159898    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:40:49.171464    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:40:49.171476    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:40:49.183747    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:40:49.183757    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:40:49.195254    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:40:49.195266    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:40:49.231062    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:40:49.231071    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:40:49.235689    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:40:49.235696    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:40:49.252690    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:40:49.252700    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:40:49.268562    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:40:49.268572    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:40:49.291890    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:40:49.291900    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:40:49.303063    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:40:49.303075    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:40:49.318351    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:40:49.318361    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:40:49.353613    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:40:49.353623    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:40:49.367776    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:40:49.367785    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:40:49.387778    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:40:49.387787    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:40:49.403496    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:40:49.403506    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:40:51.929942    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:40:56.932119    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:40:56.932294    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:40:56.946074    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:40:56.946157    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:40:56.957864    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:40:56.957933    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:40:56.971549    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:40:56.971619    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:40:56.982551    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:40:56.982622    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:40:56.996412    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:40:56.996479    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:40:57.006756    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:40:57.006822    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:40:57.018923    4578 logs.go:276] 0 containers: []
	W0828 10:40:57.018935    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:40:57.018993    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:40:57.029534    4578 logs.go:276] 0 containers: []
	W0828 10:40:57.029552    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:40:57.029560    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:40:57.029566    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:40:57.045771    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:40:57.045780    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:40:57.063886    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:40:57.063898    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:40:57.104295    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:40:57.104307    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:40:57.118544    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:40:57.118556    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:40:57.130453    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:40:57.130466    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:40:57.141984    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:40:57.141995    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:40:57.154781    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:40:57.154793    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:40:57.168977    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:40:57.168989    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:40:57.173428    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:40:57.173436    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:40:57.193503    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:40:57.193513    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:40:57.218447    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:40:57.218456    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:40:57.230382    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:40:57.230396    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:40:57.269043    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:40:57.269062    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:40:57.284804    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:40:57.284815    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:40:59.809902    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:04.812581    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:04.812967    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:04.852058    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:04.852199    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:04.873610    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:04.873701    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:04.888958    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:04.889038    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:04.901621    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:04.901689    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:04.918834    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:04.918907    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:04.930005    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:04.930065    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:04.940492    4578 logs.go:276] 0 containers: []
	W0828 10:41:04.940503    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:04.940561    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:04.950591    4578 logs.go:276] 0 containers: []
	W0828 10:41:04.950607    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:04.950614    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:04.950618    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:04.969187    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:04.969197    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:04.986314    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:04.986324    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:04.997830    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:04.997843    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:05.034898    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:05.034911    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:05.050979    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:05.050992    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:05.066242    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:05.066255    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:05.091645    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:05.091656    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:05.103576    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:05.103586    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:05.107818    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:05.107826    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:05.122003    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:05.122012    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:05.135604    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:05.135614    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:05.152669    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:05.152681    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:05.188908    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:05.188917    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:05.202434    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:05.202443    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:07.724623    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:12.726700    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:12.726813    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:12.737776    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:12.737852    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:12.748674    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:12.748751    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:12.775782    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:12.775857    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:12.788243    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:12.788311    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:12.804842    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:12.804907    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:12.818474    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:12.818545    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:12.829919    4578 logs.go:276] 0 containers: []
	W0828 10:41:12.829932    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:12.829997    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:12.840775    4578 logs.go:276] 0 containers: []
	W0828 10:41:12.840788    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:12.840796    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:12.840801    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:12.853799    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:12.853810    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:12.871980    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:12.871996    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:12.896357    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:12.896368    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:12.901032    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:12.901044    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:12.915788    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:12.915799    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:12.927981    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:12.927996    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:12.945491    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:12.945503    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:12.985430    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:12.985446    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:13.000644    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:13.000656    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:13.015651    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:13.015662    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:13.027275    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:13.027289    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:13.061101    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:13.061115    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:13.082111    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:13.082124    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:13.095926    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:13.095939    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
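
The block above is one iteration of minikube's restart wait loop: it probes the apiserver's /healthz endpoint with a 5s budget, and when the probe times out it enumerates each control-plane component's containers with docker ps name filters and tails their logs (the "container status" step falls back from crictl to docker ps when crictl is absent). A minimal sketch of the same probe, run inside the guest, with the endpoint and container ID copied from the log; curl here is only a stand-in for the Go HTTP client, which validates the cluster CA:

    # Probe the apiserver health endpoint the way api_server.go does
    # (5s budget, then report "stopped"). -k skips TLS verification,
    # which the real probe performs against the cluster CA.
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz; echo

    # Enumerate a component's containers the way logs.go does, then
    # tail one of them (container ID copied from the log above).
    docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'
    docker logs --tail 400 05bd8745a507
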
	I0828 10:41:15.610560    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:20.612756    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:20.613052    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:20.641676    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:20.641796    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:20.660408    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:20.660484    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:20.673400    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:20.673463    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:20.685388    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:20.685461    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:20.695866    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:20.695934    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:20.706214    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:20.706284    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:20.716466    4578 logs.go:276] 0 containers: []
	W0828 10:41:20.716479    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:20.716537    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:20.726718    4578 logs.go:276] 0 containers: []
	W0828 10:41:20.726742    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:20.726750    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:20.726756    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:20.746371    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:20.746381    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:20.760472    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:20.760485    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:20.771494    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:20.771503    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:20.775839    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:20.775846    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:20.809151    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:20.809165    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:20.825735    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:20.825748    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:20.843429    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:20.843440    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:20.880865    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:20.880872    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:20.894276    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:20.894287    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:20.907344    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:20.907356    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:20.921891    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:20.921901    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:20.941679    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:20.941692    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:20.953750    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:20.953763    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:20.976298    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:20.976305    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:23.489708    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:28.491726    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:28.491796    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:28.503826    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:28.503879    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:28.515288    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:28.515387    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:28.528213    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:28.528274    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:28.543883    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:28.543926    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:28.555794    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:28.555851    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:28.567304    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:28.567353    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:28.579056    4578 logs.go:276] 0 containers: []
	W0828 10:41:28.579071    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:28.579139    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:28.590234    4578 logs.go:276] 0 containers: []
	W0828 10:41:28.590247    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:28.590259    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:28.590265    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:28.629714    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:28.629725    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:28.651092    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:28.651109    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:28.668203    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:28.668224    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:28.682393    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:28.682406    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:28.721072    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:28.721086    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:28.739295    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:28.739305    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:28.751375    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:28.751388    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:28.765100    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:28.765114    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:28.779169    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:28.779180    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:28.789990    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:28.790001    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:28.804856    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:28.804866    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:28.816797    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:28.816810    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:28.834860    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:28.834871    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:28.858178    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:28.858189    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:31.362554    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:36.363021    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:36.363131    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:36.373937    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:36.374005    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:36.384347    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:36.384421    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:36.395535    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:36.395605    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:36.406094    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:36.406157    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:36.416713    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:36.416779    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:36.427373    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:36.427447    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:36.441161    4578 logs.go:276] 0 containers: []
	W0828 10:41:36.441173    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:36.441229    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:36.453358    4578 logs.go:276] 0 containers: []
	W0828 10:41:36.453374    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:36.453382    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:36.453389    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:36.490236    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:36.490248    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:36.524783    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:36.524797    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:36.542086    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:36.542098    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:36.561753    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:36.561764    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:36.575961    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:36.575972    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:36.591733    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:36.591744    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:36.607912    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:36.607921    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:36.630371    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:36.630380    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:36.634441    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:36.634450    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:36.646720    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:36.646730    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:36.660468    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:36.660478    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:36.671772    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:36.671783    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:36.687004    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:36.687015    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:36.699669    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:36.699679    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:39.213114    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:44.215693    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:44.216137    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:44.251796    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:44.251943    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:44.273705    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:44.273807    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:44.290956    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:44.291040    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:44.304276    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:44.304352    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:44.314564    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:44.314633    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:44.325368    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:44.325439    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:44.336961    4578 logs.go:276] 0 containers: []
	W0828 10:41:44.336981    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:44.337043    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:44.347313    4578 logs.go:276] 0 containers: []
	W0828 10:41:44.347324    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:44.347333    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:44.347340    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:44.361033    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:44.361051    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:44.384399    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:44.384409    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:44.419840    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:44.419846    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:44.433467    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:44.433482    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:44.448305    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:44.448319    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:44.465752    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:44.465766    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:44.478156    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:44.478167    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:44.490567    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:44.490582    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:44.502439    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:44.502452    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:44.506739    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:44.506747    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:44.540433    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:44.540449    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:44.561331    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:44.561345    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:44.575646    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:44.575658    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:44.591540    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:44.591551    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:47.111880    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:52.114334    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:52.114455    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:52.125418    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:52.125489    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:52.136538    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:52.136650    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:52.148622    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:52.148697    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:52.161141    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:52.161212    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:52.173088    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:52.173159    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:52.185530    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:52.185618    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:52.196738    4578 logs.go:276] 0 containers: []
	W0828 10:41:52.196750    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:52.196810    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:52.208291    4578 logs.go:276] 0 containers: []
	W0828 10:41:52.208303    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:52.208312    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:52.208318    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:52.227092    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:52.227112    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:52.240654    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:52.240667    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:52.262115    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:52.262128    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:52.278824    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:52.278849    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:52.297604    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:52.297616    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:52.323062    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:52.323082    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:52.366297    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:52.366319    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:52.381872    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:52.381885    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:52.386685    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:52.386704    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:52.425015    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:52.425026    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:52.439231    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:52.439243    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:52.452158    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:52.452174    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:52.468810    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:52.468821    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:52.481912    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:52.481924    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:54.995597    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:59.997857    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:59.997997    4578 kubeadm.go:597] duration metric: took 4m4.023250167s to restartPrimaryControlPlane
	W0828 10:41:59.998127    4578 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 10:41:59.998182    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
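
After 4m4s without a healthy /healthz response, minikube gives up on restarting the existing control plane and resets it. The reset it runs over SSH, copied verbatim from the line above and split for readability:

    # Tear down the existing control plane so kubeadm init can start clean.
    # --cri-socket points kubeadm at cri-dockerd; --force skips the prompt.
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
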
	I0828 10:42:00.934267    4578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 10:42:00.939202    4578 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 10:42:00.941903    4578 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 10:42:00.944878    4578 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 10:42:00.944884    4578 kubeadm.go:157] found existing configuration files:
	
	I0828 10:42:00.944908    4578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/admin.conf
	I0828 10:42:00.947471    4578 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 10:42:00.947495    4578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 10:42:00.949909    4578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/kubelet.conf
	I0828 10:42:00.952795    4578 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 10:42:00.952816    4578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 10:42:00.956253    4578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/controller-manager.conf
	I0828 10:42:00.958858    4578 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 10:42:00.958880    4578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 10:42:00.961345    4578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/scheduler.conf
	I0828 10:42:00.964331    4578 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 10:42:00.964355    4578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
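
The cleanup above follows one rule: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed so kubeadm can regenerate it. A compact sketch of the same logic, with the endpoint copied from the log:

    # Remove any kubeconfig that does not point at the expected endpoint.
    # In this run every grep fails (the files do not exist after the reset),
    # so each rm is a no-op and kubeadm init rewrites all four from scratch.
    endpoint="https://control-plane.minikube.internal:50293"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" || \
        sudo rm -f "/etc/kubernetes/$f.conf"
    done
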
	I0828 10:42:00.967091    4578 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 10:42:00.983587    4578 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0828 10:42:00.983617    4578 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 10:42:01.031333    4578 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 10:42:01.031391    4578 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 10:42:01.031467    4578 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 10:42:01.081066    4578 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 10:42:01.085336    4578 out.go:235]   - Generating certificates and keys ...
	I0828 10:42:01.085374    4578 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 10:42:01.085411    4578 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 10:42:01.085450    4578 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 10:42:01.085490    4578 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 10:42:01.085527    4578 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 10:42:01.085564    4578 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 10:42:01.085600    4578 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 10:42:01.085630    4578 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 10:42:01.085667    4578 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 10:42:01.085704    4578 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 10:42:01.085726    4578 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 10:42:01.085754    4578 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 10:42:01.168636    4578 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 10:42:01.267805    4578 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 10:42:01.412586    4578 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 10:42:01.672865    4578 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 10:42:01.703386    4578 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 10:42:01.703689    4578 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 10:42:01.703728    4578 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 10:42:01.790637    4578 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 10:42:01.793569    4578 out.go:235]   - Booting up control plane ...
	I0828 10:42:01.793616    4578 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 10:42:01.793657    4578 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 10:42:01.793688    4578 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 10:42:01.793748    4578 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 10:42:01.793830    4578 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 10:42:06.294727    4578 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502091 seconds
	I0828 10:42:06.294843    4578 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 10:42:06.300804    4578 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 10:42:06.818976    4578 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 10:42:06.819358    4578 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-717000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 10:42:07.324945    4578 kubeadm.go:310] [bootstrap-token] Using token: gikppl.stuh2yrx4blizjqe
	I0828 10:42:07.330907    4578 out.go:235]   - Configuring RBAC rules ...
	I0828 10:42:07.330988    4578 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 10:42:07.331046    4578 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 10:42:07.336375    4578 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 10:42:07.337539    4578 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 10:42:07.338614    4578 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 10:42:07.339590    4578 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 10:42:07.343355    4578 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 10:42:07.517083    4578 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 10:42:07.729450    4578 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 10:42:07.729915    4578 kubeadm.go:310] 
	I0828 10:42:07.729947    4578 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 10:42:07.729954    4578 kubeadm.go:310] 
	I0828 10:42:07.729994    4578 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 10:42:07.729997    4578 kubeadm.go:310] 
	I0828 10:42:07.730009    4578 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 10:42:07.730041    4578 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 10:42:07.730194    4578 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 10:42:07.730202    4578 kubeadm.go:310] 
	I0828 10:42:07.730234    4578 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 10:42:07.730237    4578 kubeadm.go:310] 
	I0828 10:42:07.730259    4578 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 10:42:07.730262    4578 kubeadm.go:310] 
	I0828 10:42:07.730311    4578 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 10:42:07.730347    4578 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 10:42:07.730408    4578 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 10:42:07.730415    4578 kubeadm.go:310] 
	I0828 10:42:07.730461    4578 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 10:42:07.730505    4578 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 10:42:07.730509    4578 kubeadm.go:310] 
	I0828 10:42:07.730553    4578 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gikppl.stuh2yrx4blizjqe \
	I0828 10:42:07.730606    4578 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5b3c4c1f8e59fd4c25ce08db6b17ec7ac98ea4455ff93445c7a91221249d86a1 \
	I0828 10:42:07.730619    4578 kubeadm.go:310] 	--control-plane 
	I0828 10:42:07.730624    4578 kubeadm.go:310] 
	I0828 10:42:07.730665    4578 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 10:42:07.730667    4578 kubeadm.go:310] 
	I0828 10:42:07.730708    4578 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gikppl.stuh2yrx4blizjqe \
	I0828 10:42:07.730773    4578 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5b3c4c1f8e59fd4c25ce08db6b17ec7ac98ea4455ff93445c7a91221249d86a1 
	I0828 10:42:07.730827    4578 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
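
Once the stale state is gone, kubeadm init completes in about six seconds. For reference, the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key; it can be recomputed with the standard recipe from the kubeadm docs, using the certificateDir reported earlier in the init output, and should reproduce the 5b3c4c1f... value printed above:

    # Recompute the discovery-token CA cert hash from the cluster CA
    # (standard kubeadm recipe; path is minikube's certificateDir).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
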
	I0828 10:42:07.730841    4578 cni.go:84] Creating CNI manager for ""
	I0828 10:42:07.730849    4578 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:42:07.733853    4578 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 10:42:07.737833    4578 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 10:42:07.740730    4578 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
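
With the qemu2 driver and the docker runtime on Kubernetes v1.24+, minikube installs its built-in bridge CNI by copying a 496-byte conflist into /etc/cni/net.d, as the two lines above show. To see exactly what was written, the file can be read back from the guest (profile name taken from this run):

    # Inspect the bridge CNI config minikube just wrote into the guest.
    minikube ssh -p running-upgrade-717000 "sudo cat /etc/cni/net.d/1-k8s.conflist"
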
	I0828 10:42:07.745478    4578 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 10:42:07.745522    4578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 10:42:07.745580    4578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-717000 minikube.k8s.io/updated_at=2024_08_28T10_42_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=running-upgrade-717000 minikube.k8s.io/primary=true
	I0828 10:42:07.792270    4578 ops.go:34] apiserver oom_adj: -16
	I0828 10:42:07.792301    4578 kubeadm.go:1113] duration metric: took 46.810875ms to wait for elevateKubeSystemPrivileges
	I0828 10:42:07.794640    4578 kubeadm.go:394] duration metric: took 4m11.834239916s to StartCluster
	I0828 10:42:07.794655    4578 settings.go:142] acquiring lock: {Name:mk584f5f183a19e050e7184c0c9e70ea26430337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:42:07.794743    4578 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:42:07.795098    4578 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/kubeconfig: {Name:mke8b729c65a2ae9e4d9042dc78e2127479f8609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:42:07.795292    4578 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:42:07.795394    4578 config.go:182] Loaded profile config "running-upgrade-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:42:07.795327    4578 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 10:42:07.795457    4578 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-717000"
	I0828 10:42:07.795470    4578 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-717000"
	W0828 10:42:07.795475    4578 addons.go:243] addon storage-provisioner should already be in state true
	I0828 10:42:07.795486    4578 host.go:66] Checking if "running-upgrade-717000" exists ...
	I0828 10:42:07.795498    4578 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-717000"
	I0828 10:42:07.795510    4578 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-717000"
	I0828 10:42:07.795730    4578 retry.go:31] will retry after 1.306894239s: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/running-upgrade-717000/monitor: connect: connection refused
	I0828 10:42:07.796409    4578 kapi.go:59] client config for running-upgrade-717000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/client.key", CAFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104683eb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0828 10:42:07.796542    4578 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-717000"
	W0828 10:42:07.796546    4578 addons.go:243] addon default-storageclass should already be in state true
	I0828 10:42:07.796554    4578 host.go:66] Checking if "running-upgrade-717000" exists ...
	I0828 10:42:07.797086    4578 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 10:42:07.797092    4578 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 10:42:07.797097    4578 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/running-upgrade-717000/id_rsa Username:docker}
	I0828 10:42:07.799846    4578 out.go:177] * Verifying Kubernetes components...
	I0828 10:42:07.806769    4578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:42:07.894016    4578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 10:42:07.898874    4578 api_server.go:52] waiting for apiserver process to appear ...
	I0828 10:42:07.898913    4578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:42:07.902809    4578 api_server.go:72] duration metric: took 107.509834ms to wait for apiserver process to appear ...
	I0828 10:42:07.902817    4578 api_server.go:88] waiting for apiserver healthz status ...
	I0828 10:42:07.902824    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:07.973423    4578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 10:42:08.264849    4578 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0828 10:42:08.264861    4578 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0828 10:42:09.111496    4578 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:42:09.115514    4578 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 10:42:09.115532    4578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 10:42:09.115549    4578 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/running-upgrade-717000/id_rsa Username:docker}
	I0828 10:42:09.176496    4578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
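
Addon installation is just an scp of the manifest into the guest followed by kubectl apply against the in-VM kubeconfig, as the lines above show for storageclass and storage-provisioner. A quick way to confirm what ended up enabled for this profile (standard minikube CLI; output shape varies by minikube version):

    # List addon states for this profile after the enable step finishes.
    minikube addons list -p running-upgrade-717000
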
	I0828 10:42:12.904815    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:12.904881    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:17.905177    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:17.905217    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:22.905495    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:22.905545    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:27.905988    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:27.906074    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:32.906740    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:32.906756    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:37.907696    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:37.907790    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0828 10:42:38.266341    4578 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0828 10:42:38.270118    4578 out.go:177] * Enabled addons: storage-provisioner
	I0828 10:42:38.279903    4578 addons.go:510] duration metric: took 30.485683709s for enable addons: enabled=[storage-provisioner]
	I0828 10:42:42.909606    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:42.909657    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:47.911371    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:47.911464    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:52.913974    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:52.914030    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:57.915414    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:57.915442    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:02.917577    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:02.917663    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:07.920188    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:07.920487    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:07.960381    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:43:07.960482    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:07.983797    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:43:07.983939    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:07.996184    4578 logs.go:276] 2 containers: [e251198522b1 f352e786668a]
	I0828 10:43:07.996258    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:08.007348    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:43:08.007414    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:08.018059    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:43:08.018137    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:08.031552    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:43:08.031631    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:08.044478    4578 logs.go:276] 0 containers: []
	W0828 10:43:08.044488    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:08.044551    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:08.055449    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:43:08.055464    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:08.055470    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:08.061065    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:43:08.061072    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:43:08.075321    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:43:08.075332    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:43:08.087424    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:43:08.087438    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:43:08.099293    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:08.099306    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:08.122608    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:08.122615    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:43:08.154021    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:08.154119    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:08.155402    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:43:08.155410    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:43:08.170842    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:43:08.170853    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:43:08.191659    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:43:08.191670    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:43:08.203525    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:43:08.203540    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:43:08.221425    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:43:08.221436    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:43:08.232802    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:43:08.232811    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:08.244029    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:08.244040    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:08.278669    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:08.278685    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:43:08.278712    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:43:08.278717    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:08.278721    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:08.278733    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:08.278735    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
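
The api_server.go:253/269 pairs above repeat one probe: an HTTPS GET against the apiserver's /healthz endpoint that gives up after about five seconds ("Client.Timeout exceeded while awaiting headers"), then retries until an overall deadline passes. A minimal Go sketch of that poll-with-timeout pattern (hypothetical names, with the timeout and retry spacing inferred from the timestamps above; not minikube's actual implementation):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers 200 OK or the overall
	// deadline expires. Each probe carries its own short client timeout,
	// which is what produces the "Client.Timeout exceeded while awaiting
	// headers" errors seen in the log when the apiserver never answers.
	func waitForHealthz(url string, probeTimeout, overall time.Duration) error {
		client := &http.Client{
			Timeout: probeTimeout,
			// The in-VM apiserver uses a cluster-signed certificate, so a
			// bare probe like this one skips verification. This is a
			// simplification: the real client trusts the cluster CA.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			} else {
				fmt.Printf("stopped: %s: %v\n", url, err)
			}
			time.Sleep(10 * time.Second) // roughly the spacing between probe rounds above
		}
		return fmt.Errorf("apiserver never reported healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
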
	I0828 10:43:18.281712    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:23.283978    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:23.284157    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:23.304523    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:43:23.304621    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:23.319612    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:43:23.319690    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:23.332571    4578 logs.go:276] 2 containers: [e251198522b1 f352e786668a]
	I0828 10:43:23.332647    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:23.343413    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:43:23.343479    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:23.354408    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:43:23.354479    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:23.365048    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:43:23.365113    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:23.375126    4578 logs.go:276] 0 containers: []
	W0828 10:43:23.375135    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:23.375186    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:23.385807    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:43:23.385822    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:43:23.385828    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:43:23.400059    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:43:23.400072    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:43:23.413660    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:43:23.413671    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:43:23.425438    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:43:23.425448    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:43:23.440496    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:43:23.440508    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:43:23.453135    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:43:23.453148    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:43:23.469927    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:23.469936    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:43:23.501525    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:23.501625    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:23.502866    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:23.502870    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:23.507452    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:23.507460    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:23.542934    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:43:23.542947    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:43:23.554843    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:43:23.554857    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:43:23.576656    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:23.576667    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:23.601502    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:43:23.601514    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:23.612948    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:23.612959    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:43:23.612986    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:43:23.612991    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:23.612994    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:23.613023    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:23.613026    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
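
Between probes, each diagnostic pass locates the containers for every control-plane component with a name filter before tailing their logs; that is what the repeated "docker ps -a --filter=name=k8s_... --format={{.ID}}" calls above do. A sketch of that enumeration step (hypothetical helper, run locally here rather than over SSH as ssh_runner.go does):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers, running or exited,
	// whose name matches k8s_<component>, mirroring the docker ps calls
	// in the log. One ID per line comes back on stdout.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			// Matches the "N containers: [...]" lines from logs.go:276.
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}
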
	I0828 10:43:33.616829    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:38.619179    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:38.619525    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:38.652869    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:43:38.653001    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:38.671150    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:43:38.671244    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:38.685240    4578 logs.go:276] 2 containers: [e251198522b1 f352e786668a]
	I0828 10:43:38.685319    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:38.697013    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:43:38.697087    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:38.709153    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:43:38.709221    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:38.719219    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:43:38.719284    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:38.729998    4578 logs.go:276] 0 containers: []
	W0828 10:43:38.730010    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:38.730076    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:38.740455    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:43:38.740469    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:38.740475    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:43:38.772680    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:38.772777    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:38.774009    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:38.774015    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:38.810471    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:43:38.810484    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:43:38.824990    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:43:38.825001    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:43:38.836687    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:43:38.836697    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:43:38.853293    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:43:38.853308    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:43:38.865167    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:43:38.865178    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:38.877572    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:38.877585    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:38.882329    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:43:38.882337    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:43:38.905275    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:43:38.905288    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:43:38.916862    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:43:38.916872    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:43:38.928554    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:43:38.928564    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:43:38.946609    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:38.946619    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:38.971602    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:38.971612    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:43:38.971638    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:43:38.971643    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:38.971648    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:38.971672    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:38.971689    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
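
The "Found kubelet problem" warnings (logs.go:138) come from scanning the journal output gathered one line earlier for known failure markers. A minimal sketch of that scan, assuming a hypothetical marker list (minikube's real pattern list differs); the forbidden-configmap reflector errors flagged above would match:

	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same source the log gathers: the last 400 kubelet journal lines.
		out, err := exec.Command("bash", "-c", "sudo journalctl -u kubelet -n 400").Output()
		if err != nil {
			fmt.Println("journalctl failed:", err)
			return
		}
		// Hypothetical subset of problem markers for illustration only.
		markers := []string{"Failed to watch", "failed to list", "is forbidden"}
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			line := sc.Text()
			for _, m := range markers {
				if strings.Contains(line, m) {
					fmt.Println("Found kubelet problem:", line)
					break
				}
			}
		}
	}
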
	I0828 10:43:48.975521    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:53.978035    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:53.978246    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:54.002389    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:43:54.002489    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:54.017807    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:43:54.017886    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:54.030069    4578 logs.go:276] 2 containers: [e251198522b1 f352e786668a]
	I0828 10:43:54.030142    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:54.046707    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:43:54.046778    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:54.061967    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:43:54.062041    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:54.080041    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:43:54.080108    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:54.090371    4578 logs.go:276] 0 containers: []
	W0828 10:43:54.090382    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:54.090442    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:54.100800    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:43:54.100816    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:54.100821    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:54.105242    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:43:54.105251    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:43:54.118903    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:43:54.118916    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:43:54.130743    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:43:54.130754    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:43:54.148889    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:43:54.148898    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:43:54.160745    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:43:54.160756    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:54.172886    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:54.172899    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:43:54.204802    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:54.204899    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:54.206133    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:54.206137    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:54.242549    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:43:54.242560    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:43:54.257085    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:43:54.257096    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:43:54.274648    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:43:54.274667    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:43:54.289675    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:43:54.289690    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:43:54.305219    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:54.305233    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:54.330393    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:54.330402    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:43:54.330429    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:43:54.330434    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:54.330438    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:54.330441    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:54.330444    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:44:04.334367    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:09.337610    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:09.338038    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:09.376906    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:44:09.377049    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:09.399448    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:44:09.399541    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:09.421128    4578 logs.go:276] 2 containers: [e251198522b1 f352e786668a]
	I0828 10:44:09.421202    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:09.432973    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:44:09.433044    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:09.444127    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:44:09.444198    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:09.455193    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:44:09.455260    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:09.465236    4578 logs.go:276] 0 containers: []
	W0828 10:44:09.465249    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:09.465301    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:09.476140    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:44:09.476158    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:09.476163    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:44:09.508425    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:09.508524    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:09.509818    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:09.509823    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:09.514081    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:44:09.514090    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:44:09.525708    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:44:09.525719    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:44:09.547268    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:44:09.547283    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:44:09.559477    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:44:09.559487    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:44:09.571465    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:44:09.571475    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:09.583132    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:09.583147    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:09.619970    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:44:09.619980    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:44:09.634980    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:44:09.634990    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:44:09.648936    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:44:09.648947    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:44:09.664557    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:44:09.664568    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:44:09.682295    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:09.682305    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:09.707191    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:09.707199    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:44:09.707222    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:44:09.707227    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:09.707230    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:09.707234    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:09.707281    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:44:19.712901    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:24.716195    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:24.716577    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:24.751657    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:44:24.751796    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:24.770762    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:44:24.770853    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:24.788814    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:44:24.788890    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:24.800868    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:44:24.800948    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:24.811429    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:44:24.811501    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:24.821719    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:44:24.821789    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:24.832229    4578 logs.go:276] 0 containers: []
	W0828 10:44:24.832241    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:24.832297    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:24.842592    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:44:24.842608    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:24.842612    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:24.877636    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:44:24.877650    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:44:24.889839    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:44:24.889853    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:44:24.901831    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:24.901844    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:24.926805    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:44:24.926814    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:24.939163    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:44:24.939174    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:44:24.953843    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:44:24.953856    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:44:24.968609    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:44:24.968620    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:44:24.980408    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:44:24.980419    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:44:24.992680    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:24.992691    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:24.997900    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:44:24.997906    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:44:25.009497    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:44:25.009507    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:44:25.020944    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:44:25.020952    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:44:25.040327    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:25.040337    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:44:25.074378    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:25.074477    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:25.075770    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:44:25.075775    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:44:25.097414    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:25.097426    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:44:25.097453    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:44:25.097458    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:25.097461    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:25.097465    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:25.097468    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:44:35.102729    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:40.105352    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:40.105533    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:40.120420    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:44:40.120506    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:40.133425    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:44:40.133504    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:40.144126    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:44:40.144206    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:40.155316    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:44:40.155390    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:40.166176    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:44:40.166250    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:40.177090    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:44:40.177168    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:40.187303    4578 logs.go:276] 0 containers: []
	W0828 10:44:40.187315    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:40.187370    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:40.205442    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:44:40.205460    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:44:40.205466    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:44:40.219199    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:44:40.219211    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:44:40.230867    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:44:40.230878    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:44:40.243132    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:44:40.243148    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:44:40.260661    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:40.260672    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:40.264942    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:40.264948    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:40.298787    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:44:40.298798    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:44:40.318274    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:40.318286    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:40.343041    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:44:40.343050    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:40.354859    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:44:40.354873    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:44:40.369008    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:44:40.369021    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:44:40.381767    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:44:40.381780    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:44:40.397138    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:40.397149    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:44:40.428718    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:40.428815    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:40.430091    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:44:40.430096    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:44:40.450130    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:44:40.450141    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:44:40.461680    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:40.461690    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:44:40.461716    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:44:40.461723    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:40.461727    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:40.461731    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:40.461734    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:44:50.466071    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:55.468397    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:55.468498    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:55.482127    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:44:55.482195    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:55.492652    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:44:55.492726    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:55.503637    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:44:55.503709    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:55.521576    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:44:55.521647    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:55.532102    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:44:55.532168    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:55.542713    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:44:55.542777    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:55.553026    4578 logs.go:276] 0 containers: []
	W0828 10:44:55.553039    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:55.553098    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:55.563862    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:44:55.563878    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:55.563883    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:44:55.596614    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:55.596716    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:55.598036    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:44:55.598044    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:44:55.612326    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:44:55.612336    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:44:55.633402    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:44:55.633412    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:44:55.644809    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:44:55.644823    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:44:55.656487    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:55.656498    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:55.661226    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:44:55.661233    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:44:55.682100    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:44:55.682111    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:44:55.693246    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:55.693259    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:55.718586    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:44:55.718595    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:55.731198    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:55.731209    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:55.766669    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:44:55.766679    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:44:55.779175    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:44:55.779186    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:44:55.797028    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:44:55.797039    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:44:55.828228    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:44:55.828238    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:44:55.846480    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:55.846490    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:44:55.846517    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:44:55.846521    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:55.846524    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:55.846527    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:55.846530    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
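	[Editor's note] Each diagnostic round in this capture follows the same shape: minikube probes /healthz, and when the probe times out it enumerates the control-plane containers with `docker ps -a --filter=name=k8s_<component>` and tails each one with `docker logs --tail 400 <id>`. A minimal sketch of reproducing that sweep by hand from inside the guest (shell access via `minikube ssh -p running-upgrade-717000` is an assumption; the `k8s_` name prefix and the container IDs come from the log itself):

	# Hypothetical manual version of the container sweep above, run inside the guest.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	  echo "== ${c} =="
	  docker ps -a --filter "name=k8s_${c}" --format '{{.ID}} {{.Status}}'
	done
	docker logs --tail 400 d751e569ea31   # kube-apiserver container ID from the log above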
	I0828 10:45:05.850561    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:10.852874    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:10.853123    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:10.878507    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:45:10.878605    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:10.897398    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:45:10.897476    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:10.913584    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:45:10.913666    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:10.925007    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:45:10.925077    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:10.937555    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:45:10.937611    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:10.949554    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:45:10.949617    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:10.971766    4578 logs.go:276] 0 containers: []
	W0828 10:45:10.971779    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:10.971841    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:10.983349    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:45:10.983367    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:10.983372    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:10.988439    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:45:10.988450    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:45:11.003932    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:45:11.003943    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:45:11.018591    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:45:11.018602    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:45:11.036880    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:45:11.036890    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:45:11.062886    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:45:11.062900    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:45:11.076441    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:45:11.076453    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:45:11.092897    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:45:11.092912    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:45:11.105160    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:11.105173    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:11.131177    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:11.131191    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:45:11.167064    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:11.167169    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:11.168506    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:11.168516    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:11.206199    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:45:11.206210    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:45:11.223005    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:45:11.223017    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:45:11.235363    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:45:11.235375    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:45:11.247630    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:45:11.247642    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:11.260115    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:11.260127    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:45:11.260156    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:45:11.260161    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:11.260166    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:11.260170    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:11.260174    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:45:21.262840    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:26.265010    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:26.265234    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:26.285017    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:45:26.285101    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:26.299222    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:45:26.299297    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:26.311112    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:45:26.311195    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:26.326371    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:45:26.326432    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:26.337506    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:45:26.337575    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:26.348429    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:45:26.348484    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:26.358680    4578 logs.go:276] 0 containers: []
	W0828 10:45:26.358692    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:26.358745    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:26.369106    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:45:26.369127    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:45:26.369134    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:45:26.381193    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:45:26.381204    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:45:26.394677    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:45:26.394687    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:45:26.406018    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:45:26.406029    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:45:26.417215    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:26.417226    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:26.441747    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:45:26.441756    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:26.453205    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:45:26.453216    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:45:26.469075    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:45:26.469085    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:45:26.486470    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:45:26.486480    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:45:26.498594    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:45:26.498606    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:45:26.515756    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:26.515767    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:26.520582    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:26.520588    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:26.556033    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:45:26.556044    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:45:26.567833    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:45:26.567845    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:45:26.585800    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:26.585809    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:45:26.618318    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:26.618421    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:26.619755    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:26.619765    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:45:26.619795    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:45:26.619801    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:26.619805    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:26.619849    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:26.619878    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:45:36.622476    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:41.624635    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:41.624867    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:41.650333    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:45:41.650448    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:41.666820    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:45:41.666908    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:41.680480    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:45:41.680551    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:41.692155    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:45:41.692221    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:41.702909    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:45:41.702976    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:41.717146    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:45:41.717220    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:41.727179    4578 logs.go:276] 0 containers: []
	W0828 10:45:41.727191    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:41.727246    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:41.738188    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:45:41.738207    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:45:41.738212    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:45:41.752640    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:45:41.752651    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:45:41.764746    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:45:41.764755    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:45:41.779521    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:45:41.779534    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:45:41.797018    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:45:41.797033    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:45:41.809569    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:45:41.809581    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:45:41.828336    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:45:41.828346    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:45:41.840818    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:45:41.840828    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:45:41.852919    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:45:41.852930    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:41.865348    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:41.865360    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:45:41.900244    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:41.900345    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:41.901631    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:41.901640    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:41.906605    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:41.906614    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:41.942272    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:45:41.942284    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:45:41.956721    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:45:41.956735    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:45:41.969308    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:41.969318    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:41.993936    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:41.993948    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:45:41.993977    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:45:41.993982    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:41.993986    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:41.993989    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:41.993992    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:45:51.997802    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:56.999896    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:57.000088    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:57.012672    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:45:57.012744    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:57.023683    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:45:57.023752    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:57.034718    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:45:57.034792    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:57.046001    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:45:57.046064    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:57.058321    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:45:57.058388    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:57.069367    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:45:57.069431    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:57.081160    4578 logs.go:276] 0 containers: []
	W0828 10:45:57.081173    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:57.081235    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:57.093539    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:45:57.093560    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:57.093567    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:45:57.129312    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:57.129420    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:57.130755    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:45:57.130768    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:45:57.147521    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:45:57.147543    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:45:57.163087    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:57.163101    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:57.211321    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:45:57.211331    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:45:57.227206    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:45:57.227218    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:45:57.243642    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:45:57.243658    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:45:57.263157    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:45:57.263169    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:45:57.278454    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:57.278470    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:57.302928    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:45:57.302946    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:57.319258    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:57.319270    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:57.324695    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:45:57.324711    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:45:57.339922    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:45:57.339935    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:45:57.353006    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:45:57.353021    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:45:57.371047    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:45:57.371057    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:45:57.385541    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:57.385551    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:45:57.385578    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:45:57.385583    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:57.385600    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:57.385606    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:57.385610    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:46:07.388606    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:12.389406    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:12.393600    4578 out.go:201] 
	W0828 10:46:12.397288    4578 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0828 10:46:12.397306    4578 out.go:270] * 
	W0828 10:46:12.398548    4578 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:46:12.409356    4578 out.go:201] 

                                                
                                                
** /stderr **
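[Editor's note] The stderr capture above shows the same failure on every cycle: each GET of https://10.0.2.15:8443/healthz runs into the roughly five-second client timeout ("context deadline exceeded") until the overall 6m0s node wait expires. A minimal sketch of reproducing one probe by hand, assuming a shell inside the guest (10.0.2.15 is the QEMU user-mode NIC address and is not reachable from the host) and that curl is available; -k skips verification of the self-signed apiserver certificate:

	# Hypothetical manual probe of the endpoint minikube polls in the log above.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# A healthy apiserver replies "ok"; here the request should hang until the
	# client timeout, matching the repeated "context deadline exceeded" errors.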
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-717000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-28 10:46:12.518117 -0700 PDT m=+3344.449060292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-717000 -n running-upgrade-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-717000 -n running-upgrade-717000: exit status 2 (15.71073875s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
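[Editor's note] The only concrete problem the log gatherer flags above is the kubelet's node identity being denied access to the kube-root-ca.crt ConfigMap ("no relationship found between node 'running-upgrade-717000' and this object"), i.e. a node-authorizer denial rather than a plain RBAC gap. If the apiserver were answering, one hedged way to confirm the denial would be an impersonated access review (standard kubectl flags; treating this as the right diagnostic for the failure above is an assumption):

	# Hypothetical check: ask the apiserver whether the node identity may list
	# ConfigMaps in kube-system, impersonating the kubelet's user and group.
	kubectl auth can-i list configmaps -n kube-system \
	  --as=system:node:running-upgrade-717000 --as-group=system:nodes
	# "no" would match the reflector errors repeated in the kubelet log above.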
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-717000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-581000          | force-systemd-flag-581000 | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-611000              | force-systemd-env-611000  | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-611000           | force-systemd-env-611000  | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT | 28 Aug 24 10:36 PDT |
	| start   | -p docker-flags-261000                | docker-flags-261000       | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-581000             | force-systemd-flag-581000 | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-581000          | force-systemd-flag-581000 | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT | 28 Aug 24 10:36 PDT |
	| start   | -p cert-expiration-705000             | cert-expiration-705000    | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-261000 ssh               | docker-flags-261000       | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-261000 ssh               | docker-flags-261000       | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-261000                | docker-flags-261000       | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT | 28 Aug 24 10:36 PDT |
	| start   | -p cert-options-402000                | cert-options-402000       | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-402000 ssh               | cert-options-402000       | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-402000 -- sudo        | cert-options-402000       | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-402000                | cert-options-402000       | jenkins | v1.33.1 | 28 Aug 24 10:36 PDT | 28 Aug 24 10:36 PDT |
	| start   | -p running-upgrade-717000             | minikube                  | jenkins | v1.26.0 | 28 Aug 24 10:36 PDT | 28 Aug 24 10:37 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-717000             | running-upgrade-717000    | jenkins | v1.33.1 | 28 Aug 24 10:37 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-705000             | cert-expiration-705000    | jenkins | v1.33.1 | 28 Aug 24 10:39 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-705000             | cert-expiration-705000    | jenkins | v1.33.1 | 28 Aug 24 10:39 PDT | 28 Aug 24 10:39 PDT |
	| start   | -p kubernetes-upgrade-149000          | kubernetes-upgrade-149000 | jenkins | v1.33.1 | 28 Aug 24 10:39 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-149000          | kubernetes-upgrade-149000 | jenkins | v1.33.1 | 28 Aug 24 10:39 PDT | 28 Aug 24 10:39 PDT |
	| start   | -p kubernetes-upgrade-149000          | kubernetes-upgrade-149000 | jenkins | v1.33.1 | 28 Aug 24 10:39 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-149000          | kubernetes-upgrade-149000 | jenkins | v1.33.1 | 28 Aug 24 10:40 PDT | 28 Aug 24 10:40 PDT |
	| start   | -p stopped-upgrade-801000             | minikube                  | jenkins | v1.26.0 | 28 Aug 24 10:40 PDT | 28 Aug 24 10:40 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-801000 stop           | minikube                  | jenkins | v1.26.0 | 28 Aug 24 10:40 PDT | 28 Aug 24 10:41 PDT |
	| start   | -p stopped-upgrade-801000             | stopped-upgrade-801000    | jenkins | v1.33.1 | 28 Aug 24 10:41 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 10:41:00
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 10:41:00.829660    4717 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:41:00.829831    4717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:41:00.829837    4717 out.go:358] Setting ErrFile to fd 2...
	I0828 10:41:00.829840    4717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:41:00.830001    4717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:41:00.831304    4717 out.go:352] Setting JSON to false
	I0828 10:41:00.850876    4717 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4224,"bootTime":1724862636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:41:00.850953    4717 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:41:00.855649    4717 out.go:177] * [stopped-upgrade-801000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:41:00.863502    4717 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:41:00.863535    4717 notify.go:220] Checking for updates...
	I0828 10:41:00.869550    4717 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:41:00.872546    4717 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:41:00.875582    4717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:41:00.878574    4717 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:41:00.881520    4717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:41:00.884880    4717 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:41:00.888517    4717 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 10:41:00.891556    4717 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:41:00.895536    4717 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:41:00.901453    4717 start.go:297] selected driver: qemu2
	I0828 10:41:00.901461    4717 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-801000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-801000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0828 10:41:00.901523    4717 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:41:00.903732    4717 cni.go:84] Creating CNI manager for ""
	I0828 10:41:00.903751    4717 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:41:00.903770    4717 start.go:340] cluster config:
	{Name:stopped-upgrade-801000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-801000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0828 10:41:00.903817    4717 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:41:00.912539    4717 out.go:177] * Starting "stopped-upgrade-801000" primary control-plane node in "stopped-upgrade-801000" cluster
	I0828 10:41:00.916521    4717 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0828 10:41:00.916538    4717 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0828 10:41:00.916545    4717 cache.go:56] Caching tarball of preloaded images
	I0828 10:41:00.916614    4717 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:41:00.916620    4717 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0828 10:41:00.916667    4717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/config.json ...
	I0828 10:41:00.917127    4717 start.go:360] acquireMachinesLock for stopped-upgrade-801000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:41:00.917160    4717 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "stopped-upgrade-801000"
	I0828 10:41:00.917170    4717 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:41:00.917176    4717 fix.go:54] fixHost starting: 
	I0828 10:41:00.917285    4717 fix.go:112] recreateIfNeeded on stopped-upgrade-801000: state=Stopped err=<nil>
	W0828 10:41:00.917293    4717 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:41:00.925513    4717 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-801000" ...
	I0828 10:40:56.932119    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:40:56.932294    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:40:56.946074    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:40:56.946157    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:40:56.957864    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:40:56.957933    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:40:56.971549    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:40:56.971619    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:40:56.982551    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:40:56.982622    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:40:56.996412    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:40:56.996479    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:40:57.006756    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:40:57.006822    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:40:57.018923    4578 logs.go:276] 0 containers: []
	W0828 10:40:57.018935    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:40:57.018993    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:40:57.029534    4578 logs.go:276] 0 containers: []
	W0828 10:40:57.029552    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:40:57.029560    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:40:57.029566    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:40:57.045771    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:40:57.045780    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:40:57.063886    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:40:57.063898    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:40:57.104295    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:40:57.104307    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:40:57.118544    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:40:57.118556    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:40:57.130453    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:40:57.130466    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:40:57.141984    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:40:57.141995    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:40:57.154781    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:40:57.154793    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:40:57.168977    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:40:57.168989    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:40:57.173428    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:40:57.173436    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:40:57.193503    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:40:57.193513    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:40:57.218447    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:40:57.218456    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:40:57.230382    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:40:57.230396    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:40:57.269043    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:40:57.269062    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:40:57.284804    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:40:57.284815    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:40:59.809902    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
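
	The block above is one iteration of minikube's apiserver wait loop: poll https://10.0.2.15:8443/healthz, hit the client timeout, enumerate the control-plane containers with `docker ps`, dump their logs, and try again. Below is a minimal, self-contained Go sketch of the polling half; the URL and per-request timeout are taken from the log, while the `waitForHealthz` helper and its backoff are illustrative assumptions, not minikube's actual api_server.go code.

	```go
	// Minimal sketch of polling an apiserver /healthz endpoint, in the
	// spirit of the api_server.go checks logged above. Not minikube's real
	// implementation; helper name, backoff, and deadline are assumptions.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 OK or deadline passes.
	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 4 * time.Second, // per-request timeout, cf. the Client.Timeout errors above
			Transport: &http.Transport{
				// The apiserver serves a self-signed certificate at this stage.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthy
				}
			}
			time.Sleep(2 * time.Second) // back off between probes
		}
		return fmt.Errorf("apiserver %s did not become healthy within %s", url, deadline)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", time.Minute); err != nil {
			fmt.Println("stopped:", err)
		}
	}
	```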
	I0828 10:41:00.929545    4717 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:41:00.929625    4717 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50471-:22,hostfwd=tcp::50472-:2376,hostname=stopped-upgrade-801000 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/disk.qcow2
	I0828 10:41:00.976133    4717 main.go:141] libmachine: STDOUT: 
	I0828 10:41:00.976173    4717 main.go:141] libmachine: STDERR: 
	I0828 10:41:00.976179    4717 main.go:141] libmachine: Waiting for VM to start (ssh -p 50471 docker@127.0.0.1)...
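
	The libmachine line above records the full qemu-system-aarch64 invocation used to restart the VM: hvf acceleration, 2200 MB of RAM, two vCPUs, and user-mode networking with host port forwards for SSH (50471) and the Docker API (50472). As a rough sketch of driving such an invocation from Go via os/exec (argument list simplified from the log; the disk path is a placeholder):

	```go
	// Sketch of launching a qemu guest the way the libmachine line above
	// does. Ports and acceleration flags mirror the log; the arg list is
	// trimmed and the disk path is a placeholder, so this is illustrative
	// rather than minikube's qemu driver code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		args := []string{
			"-M", "virt,highmem=off",
			"-cpu", "host",
			"-accel", "hvf", // hardware acceleration on Apple Silicon, per the log
			"-m", "2200",
			"-smp", "2",
			"-display", "none",
			// user-mode NIC with the SSH (50471) and Docker (50472) forwards from the log
			"-nic", "user,model=virtio,hostfwd=tcp::50471-:22,hostfwd=tcp::50472-:2376",
			"disk.qcow2", // placeholder for the profile's disk image
		}
		cmd := exec.Command("qemu-system-aarch64", args...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("STDOUT/STDERR: %s\n", out)
		if err != nil {
			fmt.Println("qemu failed:", err)
		}
	}
	```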
	I0828 10:41:04.812581    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:04.812967    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:04.852058    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:04.852199    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:04.873610    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:04.873701    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:04.888958    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:04.889038    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:04.901621    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:04.901689    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:04.918834    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:04.918907    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:04.930005    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:04.930065    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:04.940492    4578 logs.go:276] 0 containers: []
	W0828 10:41:04.940503    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:04.940561    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:04.950591    4578 logs.go:276] 0 containers: []
	W0828 10:41:04.950607    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:04.950614    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:04.950618    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:04.969187    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:04.969197    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:04.986314    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:04.986324    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:04.997830    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:04.997843    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:05.034898    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:05.034911    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:05.050979    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:05.050992    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:05.066242    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:05.066255    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:05.091645    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:05.091656    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:05.103576    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:05.103586    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:05.107818    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:05.107826    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:05.122003    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:05.122012    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:05.135604    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:05.135614    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:05.152669    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:05.152681    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:05.188908    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:05.188917    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:05.202434    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:05.202443    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:07.724623    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:12.726700    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:12.726813    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:12.737776    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:12.737852    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:12.748674    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:12.748751    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:12.775782    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:12.775857    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:12.788243    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:12.788311    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:12.804842    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:12.804907    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:12.818474    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:12.818545    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:12.829919    4578 logs.go:276] 0 containers: []
	W0828 10:41:12.829932    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:12.829997    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:12.840775    4578 logs.go:276] 0 containers: []
	W0828 10:41:12.840788    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:12.840796    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:12.840801    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:12.853799    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:12.853810    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:12.871980    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:12.871996    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:12.896357    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:12.896368    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:12.901032    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:12.901044    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:12.915788    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:12.915799    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:12.927981    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:12.927996    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:12.945491    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:12.945503    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:12.985430    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:12.985446    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:13.000644    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:13.000656    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:13.015651    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:13.015662    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:13.027275    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:13.027289    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:13.061101    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:13.061115    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:13.082111    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:13.082124    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:13.095926    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:13.095939    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:15.610560    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:20.612756    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:20.613052    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:20.641676    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:20.641796    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:20.660408    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:20.660484    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:20.673400    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:20.673463    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:20.685388    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:20.685461    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:20.695866    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:20.695934    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:20.706214    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:20.706284    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:20.716466    4578 logs.go:276] 0 containers: []
	W0828 10:41:20.716479    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:20.716537    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:20.726718    4578 logs.go:276] 0 containers: []
	W0828 10:41:20.726742    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:20.726750    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:20.726756    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:20.746371    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:20.746381    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:20.760472    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:20.760485    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:20.771494    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:20.771503    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:20.775839    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:20.775846    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:20.809151    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:20.809165    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:20.825735    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:20.825748    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:20.843429    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:20.843440    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:20.880865    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:20.880872    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:20.894276    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:20.894287    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:20.907344    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:20.907356    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:20.921891    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:20.921901    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:20.941679    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:20.941692    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:20.953750    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:20.953763    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:20.976298    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:20.976305    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:21.390153    4717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/config.json ...
	I0828 10:41:21.390715    4717 machine.go:93] provisionDockerMachine start ...
	I0828 10:41:21.390840    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:21.391203    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:21.391213    4717 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 10:41:21.470185    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 10:41:21.470217    4717 buildroot.go:166] provisioning hostname "stopped-upgrade-801000"
	I0828 10:41:21.470310    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:21.470527    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:21.470540    4717 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-801000 && echo "stopped-upgrade-801000" | sudo tee /etc/hostname
	I0828 10:41:21.551887    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-801000
	
	I0828 10:41:21.551954    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:21.552111    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:21.552122    4717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-801000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-801000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-801000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 10:41:21.623916    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 10:41:21.623929    4717 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19529-1176/.minikube CaCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19529-1176/.minikube}
	I0828 10:41:21.623938    4717 buildroot.go:174] setting up certificates
	I0828 10:41:21.623944    4717 provision.go:84] configureAuth start
	I0828 10:41:21.623952    4717 provision.go:143] copyHostCerts
	I0828 10:41:21.624041    4717 exec_runner.go:144] found /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem, removing ...
	I0828 10:41:21.624049    4717 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem
	I0828 10:41:21.624178    4717 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem (1078 bytes)
	I0828 10:41:21.624400    4717 exec_runner.go:144] found /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem, removing ...
	I0828 10:41:21.624404    4717 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem
	I0828 10:41:21.624466    4717 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem (1123 bytes)
	I0828 10:41:21.624600    4717 exec_runner.go:144] found /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem, removing ...
	I0828 10:41:21.624604    4717 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem
	I0828 10:41:21.624662    4717 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem (1679 bytes)
	I0828 10:41:21.624773    4717 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-801000 san=[127.0.0.1 localhost minikube stopped-upgrade-801000]
	I0828 10:41:21.782020    4717 provision.go:177] copyRemoteCerts
	I0828 10:41:21.782065    4717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 10:41:21.782074    4717 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa Username:docker}
	I0828 10:41:21.814285    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 10:41:21.821158    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0828 10:41:21.827874    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 10:41:21.835083    4717 provision.go:87] duration metric: took 211.142833ms to configureAuth
	I0828 10:41:21.835092    4717 buildroot.go:189] setting minikube options for container-runtime
	I0828 10:41:21.835187    4717 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:41:21.835225    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:21.835307    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:21.835312    4717 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0828 10:41:21.900773    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0828 10:41:21.900781    4717 buildroot.go:70] root file system type: tmpfs
	I0828 10:41:21.900831    4717 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0828 10:41:21.900876    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:21.900994    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:21.901028    4717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0828 10:41:21.964593    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0828 10:41:21.964650    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:21.964758    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:21.964768    4717 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0828 10:41:22.339436    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0828 10:41:22.339449    4717 machine.go:96] duration metric: took 948.758334ms to provisionDockerMachine
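
	The `diff -u ... || { mv ...; systemctl ...; }` command a few lines up is an idempotency guard: the freshly rendered docker.service unit only replaces the installed one, and docker is only reloaded and restarted, when the two files actually differ (here the diff failed because no unit existed yet, so the file was installed and the service enabled). A minimal Go sketch of the same compare-then-swap idea, using a hypothetical `writeIfChanged` helper rather than minikube's provisioner code:

	```go
	// Sketch of the compare-then-replace idiom behind the
	// `diff -u old new || { mv; daemon-reload; restart; }` command above.
	// writeIfChanged is a hypothetical helper, not a minikube API.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// writeIfChanged writes data to path only when the contents differ,
	// reporting whether a write (and hence a service restart) is needed.
	func writeIfChanged(path string, data []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, data) {
			return false, nil // unchanged: skip the restart entirely
		}
		if err := os.WriteFile(path, data, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		changed, err := writeIfChanged("/tmp/docker.service", unit)
		if err != nil {
			fmt.Println("write failed:", err)
			return
		}
		if changed {
			fmt.Println("unit updated; a daemon-reload and restart would follow here")
		}
	}
	```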
	I0828 10:41:22.339463    4717 start.go:293] postStartSetup for "stopped-upgrade-801000" (driver="qemu2")
	I0828 10:41:22.339470    4717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 10:41:22.339523    4717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 10:41:22.339531    4717 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa Username:docker}
	I0828 10:41:22.374510    4717 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 10:41:22.375895    4717 info.go:137] Remote host: Buildroot 2021.02.12
	I0828 10:41:22.375903    4717 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19529-1176/.minikube/addons for local assets ...
	I0828 10:41:22.375987    4717 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19529-1176/.minikube/files for local assets ...
	I0828 10:41:22.376102    4717 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0828 10:41:22.376235    4717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 10:41:22.379366    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0828 10:41:22.386532    4717 start.go:296] duration metric: took 47.065042ms for postStartSetup
	I0828 10:41:22.386547    4717 fix.go:56] duration metric: took 21.470148417s for fixHost
	I0828 10:41:22.386582    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:22.386692    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:22.386697    4717 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 10:41:22.448559    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724866882.709622421
	
	I0828 10:41:22.448568    4717 fix.go:216] guest clock: 1724866882.709622421
	I0828 10:41:22.448572    4717 fix.go:229] Guest: 2024-08-28 10:41:22.709622421 -0700 PDT Remote: 2024-08-28 10:41:22.386548 -0700 PDT m=+21.587789126 (delta=323.074421ms)
	I0828 10:41:22.448584    4717 fix.go:200] guest clock delta is within tolerance: 323.074421ms
	I0828 10:41:22.448587    4717 start.go:83] releasing machines lock for "stopped-upgrade-801000", held for 21.532199042s
	I0828 10:41:22.448659    4717 ssh_runner.go:195] Run: cat /version.json
	I0828 10:41:22.448670    4717 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa Username:docker}
	I0828 10:41:22.448659    4717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 10:41:22.448704    4717 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa Username:docker}
	W0828 10:41:22.449269    4717 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50471: connect: connection refused
	I0828 10:41:22.449289    4717 retry.go:31] will retry after 372.152083ms: dial tcp [::1]:50471: connect: connection refused
	W0828 10:41:22.480614    4717 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0828 10:41:22.480674    4717 ssh_runner.go:195] Run: systemctl --version
	I0828 10:41:22.482384    4717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 10:41:22.483922    4717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 10:41:22.483947    4717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0828 10:41:22.486999    4717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0828 10:41:22.491630    4717 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
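
	The two `find ... -exec sed` pipelines above normalize any bridge or podman CNI config under /etc/cni/net.d so its subnet matches the 10.244.0.0/16 pod CIDR minikube expects. A rough Go equivalent of that subnet rewrite (the regular expression mirrors the sed expression; the file name here is a placeholder):

	```go
	// Sketch of the subnet rewrite the sed one-liners above perform on
	// CNI config files. The regex mirrors the sed expression; the target
	// path is a placeholder, not the exact file minikube patched.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	var subnetRe = regexp.MustCompile(`"subnet":\s*"[^"]*"`)

	// rewriteSubnet replaces every "subnet" value in the file with cidr.
	func rewriteSubnet(path, cidr string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := subnetRe.ReplaceAll(data, []byte(fmt.Sprintf(`"subnet": %q`, cidr)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := rewriteSubnet("87-podman-bridge.conflist", "10.244.0.0/16"); err != nil {
			fmt.Println("rewrite failed:", err)
		}
	}
	```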
	I0828 10:41:22.491638    4717 start.go:495] detecting cgroup driver to use...
	I0828 10:41:22.491718    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 10:41:22.498352    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0828 10:41:22.501202    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0828 10:41:22.503909    4717 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0828 10:41:22.503935    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0828 10:41:22.507256    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 10:41:22.510659    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0828 10:41:22.513573    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 10:41:22.516291    4717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 10:41:22.519510    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0828 10:41:22.522742    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0828 10:41:22.525870    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0828 10:41:22.528836    4717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 10:41:22.531577    4717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 10:41:22.534862    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:41:22.617438    4717 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0828 10:41:22.625221    4717 start.go:495] detecting cgroup driver to use...
	I0828 10:41:22.625302    4717 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0828 10:41:22.630649    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 10:41:22.635465    4717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 10:41:22.642382    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 10:41:22.646873    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 10:41:22.651385    4717 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0828 10:41:22.709206    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 10:41:22.714133    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 10:41:22.719262    4717 ssh_runner.go:195] Run: which cri-dockerd
	I0828 10:41:22.720661    4717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0828 10:41:22.723470    4717 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0828 10:41:22.728929    4717 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0828 10:41:22.808878    4717 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0828 10:41:22.890003    4717 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0828 10:41:22.890060    4717 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0828 10:41:22.895505    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:41:22.977653    4717 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0828 10:41:24.132989    4717 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.155361584s)
	I0828 10:41:24.133063    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0828 10:41:24.137668    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 10:41:24.142112    4717 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0828 10:41:24.217403    4717 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0828 10:41:24.286660    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:41:24.362773    4717 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0828 10:41:24.368819    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 10:41:24.373187    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:41:24.449029    4717 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0828 10:41:24.487318    4717 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0828 10:41:24.487408    4717 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0828 10:41:24.489546    4717 start.go:563] Will wait 60s for crictl version
	I0828 10:41:24.489600    4717 ssh_runner.go:195] Run: which crictl
	I0828 10:41:24.491118    4717 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 10:41:24.505908    4717 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0828 10:41:24.505972    4717 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 10:41:24.522225    4717 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 10:41:24.543949    4717 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0828 10:41:24.544063    4717 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0828 10:41:24.545436    4717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 10:41:24.548896    4717 kubeadm.go:883] updating cluster {Name:stopped-upgrade-801000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-801000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0828 10:41:24.548937    4717 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0828 10:41:24.548973    4717 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 10:41:24.559328    4717 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0828 10:41:24.559339    4717 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0828 10:41:24.559385    4717 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0828 10:41:24.562937    4717 ssh_runner.go:195] Run: which lz4
	I0828 10:41:24.564150    4717 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 10:41:24.565446    4717 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 10:41:24.565455    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0828 10:41:25.487625    4717 docker.go:649] duration metric: took 923.534291ms to copy over tarball
	I0828 10:41:25.487679    4717 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 10:41:23.489708    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:26.638994    4717 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.15134275s)
	I0828 10:41:26.639008    4717 ssh_runner.go:146] rm: /preloaded.tar.lz4
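
	The preceding lines are the preload fast path: /preloaded.tar.lz4 does not exist in the guest, so the cached ~360 MB tarball is copied over, unpacked into /var with tar and lz4, and then deleted. Sketched from Go, assuming the tarball is already in place and using the same tar flags the log shows:

	```go
	// Sketch of the preload extraction logged above: check for the
	// tarball, extract it with lz4 into /var, then remove it. Paths and
	// tar flags are taken from the log; error handling is simplified.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("no preload tarball present; it would be copied over first:", err)
			return
		}
		// Same flags as the logged command: preserve xattrs, decompress with lz4.
		cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
			return
		}
		os.Remove(tarball) // free the space once the images are unpacked
	}
	```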
	I0828 10:41:26.654798    4717 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0828 10:41:26.657957    4717 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0828 10:41:26.663043    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:41:26.750163    4717 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0828 10:41:28.332505    4717 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.582381875s)
	I0828 10:41:28.332585    4717 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 10:41:28.353088    4717 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0828 10:41:28.353098    4717 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0828 10:41:28.353104    4717 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 10:41:28.357083    4717 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:41:28.358713    4717 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0828 10:41:28.360983    4717 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:41:28.361194    4717 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0828 10:41:28.363391    4717 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0828 10:41:28.363555    4717 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:41:28.365477    4717 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:41:28.365477    4717 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0828 10:41:28.366860    4717 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:41:28.366966    4717 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:41:28.368762    4717 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:41:28.369128    4717 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:41:28.369790    4717 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:41:28.369799    4717 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0828 10:41:28.370570    4717 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:41:28.371085    4717 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	W0828 10:41:29.368069    4717 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0828 10:41:29.368199    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:41:29.379814    4717 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0828 10:41:29.379845    4717 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:41:29.379892    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:41:29.390775    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0828 10:41:29.390893    4717 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0828 10:41:29.393396    4717 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0828 10:41:29.393411    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0828 10:41:29.404635    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:41:29.416528    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0828 10:41:29.419686    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:41:29.431822    4717 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0828 10:41:29.431844    4717 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:41:29.431895    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:41:29.447523    4717 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0828 10:41:29.447538    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0828 10:41:29.450705    4717 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0828 10:41:29.450732    4717 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0828 10:41:29.450786    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0828 10:41:29.460614    4717 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0828 10:41:29.460634    4717 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:41:29.460637    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0828 10:41:29.460691    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:41:29.503624    4717 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0828 10:41:29.503671    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0828 10:41:29.503697    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0828 10:41:29.503789    4717 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0828 10:41:29.505170    4717 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0828 10:41:29.505179    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0828 10:41:29.512390    4717 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0828 10:41:29.512398    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0828 10:41:29.528649    4717 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0828 10:41:29.528750    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:41:29.545332    4717 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0828 10:41:29.545363    4717 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0828 10:41:29.545379    4717 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:41:29.545429    4717 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:41:29.558561    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0828 10:41:29.558679    4717 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0828 10:41:29.560089    4717 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0828 10:41:29.560103    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0828 10:41:29.589627    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:41:29.591044    4717 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0828 10:41:29.591053    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0828 10:41:29.591445    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0828 10:41:29.597885    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0828 10:41:29.615246    4717 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0828 10:41:29.615269    4717 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:41:29.615337    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:41:29.847800    4717 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0828 10:41:29.847824    4717 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0828 10:41:29.847850    4717 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0828 10:41:29.847864    4717 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0828 10:41:29.847883    4717 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0828 10:41:29.847910    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0828 10:41:29.847910    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0828 10:41:29.847950    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0828 10:41:29.861164    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0828 10:41:29.861166    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0828 10:41:29.861222    4717 cache_images.go:92] duration metric: took 1.508166292s to LoadCachedImages
	W0828 10:41:29.861265    4717 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
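Note: LoadCachedImages walks the required image list and, for each image whose runtime copy has the wrong hash (here an arm64/amd64 arch mismatch), removes it, transfers the cached tarball into /var/lib/minikube/images, and pipes it through `docker load`. The pass fails above because one cache file (kube-controller-manager_v1.24.1) is missing on the host. A hedged sketch of the per-image pipeline, shelling out the way the log does (not minikube's real implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    )

    // loadCachedImage mirrors the per-image steps in the log: remove the
    // wrong-arch copy, transfer the cached tarball, pipe it into docker load.
    func loadCachedImage(image, cacheFile string) error {
    	dest := "/var/lib/minikube/images/" + filepath.Base(cacheFile)
    	exec.Command("docker", "rmi", image).Run() // ignore "no such image"
    	// minikube transfers over scp; a local sudo cp stands in here.
    	if err := exec.Command("sudo", "cp", cacheFile, dest).Run(); err != nil {
    		return err // this is where a missing cache file surfaces
    	}
    	return exec.Command("/bin/bash", "-c", "sudo cat "+dest+" | docker load").Run()
    }

    func main() {
    	err := loadCachedImage("registry.k8s.io/coredns/coredns:v1.8.6",
    		"/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6")
    	fmt.Println("load result:", err)
    }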
	I0828 10:41:29.861270    4717 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0828 10:41:29.861319    4717 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-801000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-801000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 10:41:29.861376    4717 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0828 10:41:29.874642    4717 cni.go:84] Creating CNI manager for ""
	I0828 10:41:29.874654    4717 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:41:29.874660    4717 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 10:41:29.874668    4717 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-801000 NodeName:stopped-upgrade-801000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 10:41:29.874736    4717 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-801000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 10:41:29.874787    4717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0828 10:41:29.878332    4717 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 10:41:29.878360    4717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 10:41:29.881218    4717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0828 10:41:29.885994    4717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 10:41:29.890802    4717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0828 10:41:29.896009    4717 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0828 10:41:29.897245    4717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 10:41:29.900578    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:41:29.967952    4717 ssh_runner.go:195] Run: sudo systemctl start kubelet
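Note: before starting the kubelet, the log ensures /etc/hosts has exactly one entry for control-plane.minikube.internal — the bash one-liner above greps out any stale line and appends the current IP. The same idempotent rewrite in Go (a sketch; writing /etc/hosts needs root, so point hostsPath elsewhere to try it out):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // Drop any stale control-plane.minikube.internal entry, then append
    // the current one, mirroring the grep/echo pipeline in the log.
    func main() {
    	const hostsPath = "/etc/hosts"
    	const entry = "10.0.2.15\tcontrol-plane.minikube.internal"
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("ensured hosts entry:", entry)
    }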
	I0828 10:41:29.973453    4717 certs.go:68] Setting up /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000 for IP: 10.0.2.15
	I0828 10:41:29.973461    4717 certs.go:194] generating shared ca certs ...
	I0828 10:41:29.973470    4717 certs.go:226] acquiring lock for ca certs: {Name:mkf861e7f19b199967d33246b8c25f60e0670f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:41:29.973639    4717 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.key
	I0828 10:41:29.973688    4717 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.key
	I0828 10:41:29.973694    4717 certs.go:256] generating profile certs ...
	I0828 10:41:29.973767    4717 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/client.key
	I0828 10:41:29.973784    4717 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.key.d629ac91
	I0828 10:41:29.973799    4717 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.crt.d629ac91 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0828 10:41:30.071317    4717 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.crt.d629ac91 ...
	I0828 10:41:30.071335    4717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.crt.d629ac91: {Name:mk5decf942ff473ed05904e6bec266e199df58a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:41:30.071892    4717 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.key.d629ac91 ...
	I0828 10:41:30.071902    4717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.key.d629ac91: {Name:mk61461cb5d4384e962aa64d28f518bdcf88010d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:41:30.072048    4717 certs.go:381] copying /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.crt.d629ac91 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.crt
	I0828 10:41:30.072200    4717 certs.go:385] copying /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.key.d629ac91 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.key
	I0828 10:41:30.072365    4717 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/proxy-client.key
	I0828 10:41:30.072506    4717 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/1678.pem (1338 bytes)
	W0828 10:41:30.072534    4717 certs.go:480] ignoring /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0828 10:41:30.072539    4717 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 10:41:30.072565    4717 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem (1078 bytes)
	I0828 10:41:30.072592    4717 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem (1123 bytes)
	I0828 10:41:30.072616    4717 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem (1679 bytes)
	I0828 10:41:30.072668    4717 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0828 10:41:30.073042    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 10:41:30.079902    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 10:41:30.087325    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 10:41:30.095024    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 10:41:30.102367    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0828 10:41:30.109532    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 10:41:30.116412    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 10:41:30.123427    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 10:41:30.130780    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0828 10:41:30.137902    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0828 10:41:30.144473    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 10:41:30.151263    4717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 10:41:30.156341    4717 ssh_runner.go:195] Run: openssl version
	I0828 10:41:30.158336    4717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 10:41:30.161247    4717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 10:41:30.162754    4717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:51 /usr/share/ca-certificates/minikubeCA.pem
	I0828 10:41:30.162778    4717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 10:41:30.164540    4717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 10:41:30.167522    4717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0828 10:41:30.170810    4717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0828 10:41:30.172244    4717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:06 /usr/share/ca-certificates/1678.pem
	I0828 10:41:30.172266    4717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0828 10:41:30.173950    4717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0828 10:41:30.176704    4717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0828 10:41:30.179692    4717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0828 10:41:30.181170    4717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:06 /usr/share/ca-certificates/16782.pem
	I0828 10:41:30.181195    4717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0828 10:41:30.182908    4717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
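Note: the b5213941.0 / 51391683.0 / 3ec20f2e.0 names above follow OpenSSL's subject-hash lookup convention — TLS verification finds a CA by hashing its subject and resolving <hash>.0 in /etc/ssl/certs, so each copied PEM gets a hash-named symlink. A sketch of that step in Go, shelling out to openssl for the hash (needs root for the symlink):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // Compute the OpenSSL subject hash for a CA and link <hash>.0 to it,
    // as the `openssl x509 -hash` / `ln -fs` pair in the log does.
    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	os.Remove(link) // replace any stale link, like `ln -fs`
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", pem)
    }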
	I0828 10:41:30.186151    4717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 10:41:30.187501    4717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 10:41:30.189345    4717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 10:41:30.191084    4717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 10:41:30.192900    4717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 10:41:30.194644    4717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 10:41:30.196492    4717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
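Note: `openssl x509 -noout -checkend 86400` exits non-zero if the certificate expires within the next 24 hours, which is how the run above decides whether each control-plane cert can be reused. The same check with Go's crypto/x509 (a sketch; path as in the log, needs read access to the cert):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // Equivalent of `openssl x509 -noout -checkend 86400`: does the cert
    // remain valid for at least another 24 hours?
    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("valid for 24h:", time.Now().Add(24*time.Hour).Before(cert.NotAfter))
    }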
	I0828 10:41:30.198213    4717 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-801000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-801000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0828 10:41:30.198289    4717 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0828 10:41:30.208756    4717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 10:41:30.211948    4717 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 10:41:30.211955    4717 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 10:41:30.211995    4717 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 10:41:30.215910    4717 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 10:41:30.216229    4717 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-801000" does not appear in /Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:41:30.216329    4717 kubeconfig.go:62] /Users/jenkins/minikube-integration/19529-1176/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-801000" cluster setting kubeconfig missing "stopped-upgrade-801000" context setting]
	I0828 10:41:30.216526    4717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/kubeconfig: {Name:mke8b729c65a2ae9e4d9042dc78e2127479f8609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:41:30.216990    4717 kapi.go:59] client config for stopped-upgrade-801000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/client.key", CAFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106777eb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
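Note: the rest.Config dump above is client-go configuration built from the profile's client cert/key and the cluster CA. A minimal equivalent constructing the same kind of client (a sketch; assumes k8s.io/client-go is available in the module, paths as in the log):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // Build a cert-authenticated client against the apiserver at
    // 10.0.2.15:8443, matching the fields in the log's rest.Config.
    func main() {
    	profile := "/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000"
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: profile + "/client.crt",
    			KeyFile:  profile + "/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("client ready: %T\n", clientset)
    }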
	I0828 10:41:30.217312    4717 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 10:41:30.220077    4717 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-801000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0828 10:41:30.220083    4717 kubeadm.go:1160] stopping kube-system containers ...
	I0828 10:41:30.220123    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0828 10:41:30.233447    4717 docker.go:483] Stopping containers: [57615586b5d3 3527382822a3 37d0386da62f f04951a7c514 d8ab8c596fcc 747a7191149c caabf38006b1 657511b584fb]
	I0828 10:41:30.233513    4717 ssh_runner.go:195] Run: docker stop 57615586b5d3 3527382822a3 37d0386da62f f04951a7c514 d8ab8c596fcc 747a7191149c caabf38006b1 657511b584fb
	I0828 10:41:30.243821    4717 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 10:41:30.249654    4717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 10:41:30.252386    4717 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 10:41:30.252392    4717 kubeadm.go:157] found existing configuration files:
	
	I0828 10:41:30.252415    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0828 10:41:30.255241    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 10:41:30.255270    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 10:41:30.258239    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0828 10:41:30.260764    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 10:41:30.260785    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 10:41:30.263573    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0828 10:41:30.266834    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 10:41:30.266859    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 10:41:30.269851    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0828 10:41:30.272184    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 10:41:30.272207    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
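Note: the four grep/rm pairs above are the stale-config sweep — any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm regenerates it. Sketched as a loop in Go (same files and endpoint as the log; removal needs root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // Remove any /etc/kubernetes config that does not pin the expected
    // control-plane endpoint, mirroring the grep/rm pairs in the log.
    func main() {
    	const endpoint = "https://control-plane.minikube.internal:50506"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		if exec.Command("grep", "-q", endpoint, path).Run() != nil {
    			os.Remove(path) // grep failed: wrong endpoint or file missing
    			fmt.Println("removed stale", path)
    		}
    	}
    }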
	I0828 10:41:30.275155    4717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 10:41:30.278069    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:41:30.300117    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:41:28.491726    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:28.491796    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:28.503826    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:28.503879    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:28.515288    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:28.515387    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:28.528213    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:28.528274    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:28.543883    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:28.543926    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:28.555794    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:28.555851    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:28.567304    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:28.567353    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:28.579056    4578 logs.go:276] 0 containers: []
	W0828 10:41:28.579071    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:28.579139    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:28.590234    4578 logs.go:276] 0 containers: []
	W0828 10:41:28.590247    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:28.590259    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:28.590265    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:28.629714    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:28.629725    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:28.651092    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:28.651109    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:28.668203    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:28.668224    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:28.682393    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:28.682406    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:28.721072    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:28.721086    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:28.739295    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:28.739305    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:28.751375    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:28.751388    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:28.765100    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:28.765114    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:28.779169    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:28.779180    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:28.789990    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:28.790001    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:28.804856    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:28.804866    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:28.816797    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:28.816810    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:28.834860    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:28.834871    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:28.858178    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:28.858189    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
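Note: each diagnostics pass like the one above enumerates containers per component with a `docker ps` name filter, then tails the last 400 lines of each, plus journalctl for kubelet/docker and dmesg. A compact Go sketch of one pass over the container logs (same filters and tail depth as the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // For each component, list matching kube-system containers and tail
    // their logs, as the repeated "Gathering logs for ..." passes do.
    func main() {
    	for _, comp := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+comp, "--format", "{{.ID}}").Output()
    		if err != nil {
    			panic(err)
    		}
    		for _, id := range strings.Fields(string(out)) {
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s", comp, id, logs)
    		}
    	}
    }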
	I0828 10:41:31.362554    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:30.829241    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:41:30.960304    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:41:30.994260    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:41:31.013773    4717 api_server.go:52] waiting for apiserver process to appear ...
	I0828 10:41:31.013865    4717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:41:31.515903    4717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:41:32.015884    4717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:41:32.515871    4717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:41:32.520124    4717 api_server.go:72] duration metric: took 1.506407875s to wait for apiserver process to appear ...
	I0828 10:41:32.520135    4717 api_server.go:88] waiting for apiserver healthz status ...
	I0828 10:41:32.520145    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
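Note: from here both test processes poll /healthz roughly every five seconds; a request that exceeds the client timeout is logged as "stopped" and the poll continues until the overall wait budget runs out. A sketch of that loop (the apiserver cert is self-signed, so verification is skipped here for illustration only):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // Poll the apiserver healthz endpoint until it answers 200 or the
    // deadline passes, mirroring the Checking/stopped pairs in the log.
    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			fmt.Println("healthz:", resp.Status)
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		} else {
    			fmt.Println("stopped:", err)
    		}
    		time.Sleep(5 * time.Second)
    	}
    	fmt.Println("apiserver never became healthy")
    }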
	I0828 10:41:36.363021    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:36.363131    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:36.373937    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:36.374005    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:36.384347    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:36.384421    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:36.395535    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:36.395605    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:36.406094    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:36.406157    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:36.416713    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:36.416779    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:36.427373    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:36.427447    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:36.441161    4578 logs.go:276] 0 containers: []
	W0828 10:41:36.441173    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:36.441229    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:36.453358    4578 logs.go:276] 0 containers: []
	W0828 10:41:36.453374    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:36.453382    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:36.453389    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:36.490236    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:36.490248    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:36.524783    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:36.524797    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:36.542086    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:36.542098    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:36.561753    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:36.561764    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:36.575961    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:36.575972    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:36.591733    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:36.591744    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:36.607912    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:36.607921    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:36.630371    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:36.630380    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:36.634441    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:36.634450    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:37.522049    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:37.522140    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:36.646720    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:36.646730    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:36.660468    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:36.660478    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:36.671772    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:36.671783    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:36.687004    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:36.687015    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:36.699669    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:36.699679    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:39.213114    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:42.522264    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:42.522337    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:44.215693    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:44.216137    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:44.251796    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:44.251943    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:44.273705    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:44.273807    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:44.290956    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:44.291040    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:44.304276    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:44.304352    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:44.314564    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:44.314633    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:44.325368    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:44.325439    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:44.336961    4578 logs.go:276] 0 containers: []
	W0828 10:41:44.336981    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:44.337043    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:44.347313    4578 logs.go:276] 0 containers: []
	W0828 10:41:44.347324    4578 logs.go:278] No container was found matching "storage-provisioner"
	I0828 10:41:44.347333    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:44.347340    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:44.361033    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:44.361051    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:44.384399    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:44.384409    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:44.419840    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:44.419846    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:44.433467    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:44.433482    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:44.448305    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:44.448319    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:44.465752    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:44.465766    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:44.478156    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:44.478167    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:44.490567    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:44.490582    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:44.502439    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:44.502452    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:44.506739    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:44.506747    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:44.540433    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:44.540449    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:44.561331    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:44.561345    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:44.575646    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:44.575658    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:44.591540    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:44.591551    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:47.522967    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:47.522990    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:47.111880    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:52.523354    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:52.523414    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:52.114334    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:52.114455    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:41:52.125418    4578 logs.go:276] 2 containers: [05bd8745a507 ea763b575572]
	I0828 10:41:52.125489    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:41:52.136538    4578 logs.go:276] 2 containers: [a1ceba175e70 e931fd3528ca]
	I0828 10:41:52.136650    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:41:52.148622    4578 logs.go:276] 1 containers: [98b08b3a9d5b]
	I0828 10:41:52.148697    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:41:52.161141    4578 logs.go:276] 2 containers: [39b902a8061a 344d6faf3784]
	I0828 10:41:52.161212    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:41:52.173088    4578 logs.go:276] 1 containers: [ec049927c0c0]
	I0828 10:41:52.173159    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:41:52.185530    4578 logs.go:276] 2 containers: [6cd64b1f8867 52b00da325a7]
	I0828 10:41:52.185618    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:41:52.196738    4578 logs.go:276] 0 containers: []
	W0828 10:41:52.196750    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:41:52.196810    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:41:52.208291    4578 logs.go:276] 0 containers: []
	W0828 10:41:52.208303    4578 logs.go:278] No container was found matching "storage-provisioner"
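
Each gathering pass starts by enumerating containers per component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`; kindnet and storage-provisioner come back empty here, hence the two warnings. A sketch of that enumeration (local exec is an assumption; minikube runs these through ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs reproduces the per-component listing:
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, _ := containerIDs(c)
            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
        }
    }
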
	I0828 10:41:52.208312    4578 logs.go:123] Gathering logs for kube-scheduler [39b902a8061a] ...
	I0828 10:41:52.208318    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39b902a8061a"
	I0828 10:41:52.227092    4578 logs.go:123] Gathering logs for kube-proxy [ec049927c0c0] ...
	I0828 10:41:52.227112    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec049927c0c0"
	I0828 10:41:52.240654    4578 logs.go:123] Gathering logs for kube-apiserver [ea763b575572] ...
	I0828 10:41:52.240667    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea763b575572"
	I0828 10:41:52.262115    4578 logs.go:123] Gathering logs for etcd [e931fd3528ca] ...
	I0828 10:41:52.262128    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e931fd3528ca"
	I0828 10:41:52.278824    4578 logs.go:123] Gathering logs for kube-controller-manager [6cd64b1f8867] ...
	I0828 10:41:52.278849    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd64b1f8867"
	I0828 10:41:52.297604    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:41:52.297616    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:41:52.323062    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:41:52.323082    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:41:52.366297    4578 logs.go:123] Gathering logs for kube-apiserver [05bd8745a507] ...
	I0828 10:41:52.366319    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05bd8745a507"
	I0828 10:41:52.381872    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:41:52.381885    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:41:52.386685    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:41:52.386704    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:41:52.425015    4578 logs.go:123] Gathering logs for etcd [a1ceba175e70] ...
	I0828 10:41:52.425026    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ceba175e70"
	I0828 10:41:52.439231    4578 logs.go:123] Gathering logs for coredns [98b08b3a9d5b] ...
	I0828 10:41:52.439243    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98b08b3a9d5b"
	I0828 10:41:52.452158    4578 logs.go:123] Gathering logs for kube-scheduler [344d6faf3784] ...
	I0828 10:41:52.452174    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344d6faf3784"
	I0828 10:41:52.468810    4578 logs.go:123] Gathering logs for kube-controller-manager [52b00da325a7] ...
	I0828 10:41:52.468821    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52b00da325a7"
	I0828 10:41:52.481912    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:41:52.481924    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:41:54.995597    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:57.523894    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:57.523934    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:59.997857    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:59.997997    4578 kubeadm.go:597] duration metric: took 4m4.023250167s to restartPrimaryControlPlane
	W0828 10:41:59.998127    4578 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 10:41:59.998182    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0828 10:42:00.934267    4578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 10:42:00.939202    4578 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 10:42:00.941903    4578 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 10:42:00.944878    4578 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 10:42:00.944884    4578 kubeadm.go:157] found existing configuration files:
	
	I0828 10:42:00.944908    4578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/admin.conf
	I0828 10:42:00.947471    4578 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 10:42:00.947495    4578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 10:42:00.949909    4578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/kubelet.conf
	I0828 10:42:00.952795    4578 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 10:42:00.952816    4578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 10:42:00.956253    4578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/controller-manager.conf
	I0828 10:42:00.958858    4578 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 10:42:00.958880    4578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 10:42:00.961345    4578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/scheduler.conf
	I0828 10:42:00.964331    4578 kubeadm.go:163] "https://control-plane.minikube.internal:50293" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50293 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 10:42:00.964355    4578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
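
The four grep/rm pairs above are a stale-kubeconfig sweep: any file under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted before kubeadm init runs. A sketch of that sweep (endpoint and paths taken from the log; plain local exec is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50293" // from the log
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            // grep exits non-zero when the endpoint (or the file itself) is
            // missing, which is what "Process exited with status 2" records above.
            if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
                fmt.Printf("%s does not reference %s - removing\n", conf, endpoint)
                exec.Command("sudo", "rm", "-f", conf).Run()
            }
        }
    }
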
	I0828 10:42:00.967091    4578 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 10:42:00.983587    4578 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0828 10:42:00.983617    4578 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 10:42:01.031333    4578 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 10:42:01.031391    4578 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 10:42:01.031467    4578 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 10:42:01.081066    4578 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 10:42:01.085336    4578 out.go:235]   - Generating certificates and keys ...
	I0828 10:42:01.085374    4578 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 10:42:01.085411    4578 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 10:42:01.085450    4578 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 10:42:01.085490    4578 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 10:42:01.085527    4578 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 10:42:01.085564    4578 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 10:42:01.085600    4578 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 10:42:01.085630    4578 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 10:42:01.085667    4578 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 10:42:01.085704    4578 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 10:42:01.085726    4578 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 10:42:01.085754    4578 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 10:42:01.168636    4578 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 10:42:01.267805    4578 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 10:42:01.412586    4578 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 10:42:01.672865    4578 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 10:42:01.703386    4578 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 10:42:01.703689    4578 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 10:42:01.703728    4578 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 10:42:01.790637    4578 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 10:42:02.524596    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:02.524620    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:01.793569    4578 out.go:235]   - Booting up control plane ...
	I0828 10:42:01.793616    4578 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 10:42:01.793657    4578 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 10:42:01.793688    4578 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 10:42:01.793748    4578 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 10:42:01.793830    4578 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 10:42:06.294727    4578 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502091 seconds
	I0828 10:42:06.294843    4578 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 10:42:06.300804    4578 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 10:42:06.818976    4578 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 10:42:06.819358    4578 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-717000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 10:42:07.324945    4578 kubeadm.go:310] [bootstrap-token] Using token: gikppl.stuh2yrx4blizjqe
	I0828 10:42:07.330907    4578 out.go:235]   - Configuring RBAC rules ...
	I0828 10:42:07.330988    4578 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 10:42:07.331046    4578 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 10:42:07.336375    4578 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 10:42:07.337539    4578 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 10:42:07.338614    4578 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 10:42:07.339590    4578 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 10:42:07.343355    4578 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 10:42:07.517083    4578 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 10:42:07.729450    4578 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 10:42:07.729915    4578 kubeadm.go:310] 
	I0828 10:42:07.729947    4578 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 10:42:07.729954    4578 kubeadm.go:310] 
	I0828 10:42:07.729994    4578 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 10:42:07.729997    4578 kubeadm.go:310] 
	I0828 10:42:07.730009    4578 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 10:42:07.730041    4578 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 10:42:07.730194    4578 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 10:42:07.730202    4578 kubeadm.go:310] 
	I0828 10:42:07.730234    4578 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 10:42:07.730237    4578 kubeadm.go:310] 
	I0828 10:42:07.730259    4578 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 10:42:07.730262    4578 kubeadm.go:310] 
	I0828 10:42:07.730311    4578 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 10:42:07.730347    4578 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 10:42:07.730408    4578 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 10:42:07.730415    4578 kubeadm.go:310] 
	I0828 10:42:07.730461    4578 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 10:42:07.730505    4578 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 10:42:07.730509    4578 kubeadm.go:310] 
	I0828 10:42:07.730553    4578 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gikppl.stuh2yrx4blizjqe \
	I0828 10:42:07.730606    4578 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5b3c4c1f8e59fd4c25ce08db6b17ec7ac98ea4455ff93445c7a91221249d86a1 \
	I0828 10:42:07.730619    4578 kubeadm.go:310] 	--control-plane 
	I0828 10:42:07.730624    4578 kubeadm.go:310] 
	I0828 10:42:07.730665    4578 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 10:42:07.730667    4578 kubeadm.go:310] 
	I0828 10:42:07.730708    4578 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gikppl.stuh2yrx4blizjqe \
	I0828 10:42:07.730773    4578 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5b3c4c1f8e59fd4c25ce08db6b17ec7ac98ea4455ff93445c7a91221249d86a1 
	I0828 10:42:07.730827    4578 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 10:42:07.730841    4578 cni.go:84] Creating CNI manager for ""
	I0828 10:42:07.730849    4578 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:42:07.733853    4578 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 10:42:07.737833    4578 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 10:42:07.740730    4578 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
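
The bridge CNI step above just copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact payload is not shown in the log; the sketch below writes a representative bridge-plus-portmap conflist of that kind (contents are illustrative, not minikube's actual bytes):

    package main

    import "os"

    // A representative bridge+portmap conflist of the kind minikube writes;
    // the actual 496-byte payload from the log is not reproduced here.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        os.MkdirAll("/etc/cni/net.d", 0o755)
        os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
    }
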
	I0828 10:42:07.745478    4578 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 10:42:07.745522    4578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 10:42:07.745580    4578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-717000 minikube.k8s.io/updated_at=2024_08_28T10_42_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=running-upgrade-717000 minikube.k8s.io/primary=true
	I0828 10:42:07.792270    4578 ops.go:34] apiserver oom_adj: -16
	I0828 10:42:07.792301    4578 kubeadm.go:1113] duration metric: took 46.810875ms to wait for elevateKubeSystemPrivileges
	I0828 10:42:07.794640    4578 kubeadm.go:394] duration metric: took 4m11.834239916s to StartCluster
	I0828 10:42:07.794655    4578 settings.go:142] acquiring lock: {Name:mk584f5f183a19e050e7184c0c9e70ea26430337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:42:07.794743    4578 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:42:07.795098    4578 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/kubeconfig: {Name:mke8b729c65a2ae9e4d9042dc78e2127479f8609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:42:07.795292    4578 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:42:07.795394    4578 config.go:182] Loaded profile config "running-upgrade-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:42:07.795327    4578 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 10:42:07.795457    4578 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-717000"
	I0828 10:42:07.795470    4578 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-717000"
	W0828 10:42:07.795475    4578 addons.go:243] addon storage-provisioner should already be in state true
	I0828 10:42:07.795486    4578 host.go:66] Checking if "running-upgrade-717000" exists ...
	I0828 10:42:07.795498    4578 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-717000"
	I0828 10:42:07.795510    4578 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-717000"
	I0828 10:42:07.795730    4578 retry.go:31] will retry after 1.306894239s: connect: dial unix /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/running-upgrade-717000/monitor: connect: connection refused
	I0828 10:42:07.796409    4578 kapi.go:59] client config for running-upgrade-717000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/running-upgrade-717000/client.key", CAFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104683eb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0828 10:42:07.796542    4578 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-717000"
	W0828 10:42:07.796546    4578 addons.go:243] addon default-storageclass should already be in state true
	I0828 10:42:07.796554    4578 host.go:66] Checking if "running-upgrade-717000" exists ...
	I0828 10:42:07.797086    4578 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 10:42:07.797092    4578 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 10:42:07.797097    4578 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/running-upgrade-717000/id_rsa Username:docker}
	I0828 10:42:07.799846    4578 out.go:177] * Verifying Kubernetes components...
	I0828 10:42:07.806769    4578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:42:07.894016    4578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 10:42:07.898874    4578 api_server.go:52] waiting for apiserver process to appear ...
	I0828 10:42:07.898913    4578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:42:07.902809    4578 api_server.go:72] duration metric: took 107.509834ms to wait for apiserver process to appear ...
	I0828 10:42:07.902817    4578 api_server.go:88] waiting for apiserver healthz status ...
	I0828 10:42:07.902824    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:07.973423    4578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 10:42:08.264849    4578 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0828 10:42:08.264861    4578 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0828 10:42:09.111496    4578 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:42:07.526253    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:07.526282    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:09.115514    4578 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 10:42:09.115532    4578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 10:42:09.115549    4578 sshutil.go:53] new ssh client: &{IP:localhost Port:50261 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/running-upgrade-717000/id_rsa Username:docker}
	I0828 10:42:09.176496    4578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
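
Both addon manifests are applied the same way: the YAML is copied into /etc/kubernetes/addons, then the version-pinned kubectl inside the guest is invoked against the in-VM kubeconfig. A sketch of that invocation (local exec is an assumption; the real call goes through ssh_runner):

    package main

    import (
        "os"
        "os/exec"
    )

    // applyManifest invokes the version-pinned kubectl the way the log shows
    // for storageclass.yaml and storage-provisioner.yaml.
    func applyManifest(path string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "apply", "-f", path)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml")
    }
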
	I0828 10:42:12.527808    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:12.527877    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:12.904815    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:12.904881    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:17.530276    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:17.530335    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:17.905177    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:17.905217    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:22.532511    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:22.532560    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:22.905495    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:22.905545    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:27.534736    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:27.534786    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:27.905988    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:27.906074    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:32.536999    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:32.537261    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:42:32.563346    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:42:32.563442    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:42:32.578340    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:42:32.578417    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:42:32.590643    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:42:32.590716    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:42:32.602699    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:42:32.602767    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:42:32.617464    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:42:32.617548    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:42:32.628036    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:42:32.628103    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:42:32.639199    4717 logs.go:276] 0 containers: []
	W0828 10:42:32.639210    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:42:32.639271    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:42:32.649841    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:42:32.649869    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:42:32.649874    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:42:32.688198    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:42:32.688209    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:42:32.771459    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:42:32.771474    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:42:32.783856    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:42:32.783876    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:42:32.798243    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:42:32.798257    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:42:32.809085    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:42:32.809095    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:42:32.820457    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:42:32.820469    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:42:32.862971    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:42:32.862982    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:42:32.874412    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:42:32.874423    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:42:32.889594    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:42:32.889605    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:42:32.906729    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:42:32.906740    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:42:32.918649    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:42:32.918665    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:42:32.931166    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:42:32.931178    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:42:32.935810    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:42:32.935819    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:42:32.950037    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:42:32.950047    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:42:32.967206    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:42:32.967217    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:42:35.494966    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:32.906740    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:32.906756    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:37.907696    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:37.907790    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0828 10:42:38.266341    4578 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0828 10:42:38.270118    4578 out.go:177] * Enabled addons: storage-provisioner
	I0828 10:42:40.497321    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:40.497663    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:42:40.529740    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:42:40.529867    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:42:40.552364    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:42:40.552449    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:42:40.566771    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:42:40.566859    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:42:40.578205    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:42:40.578284    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:42:40.592075    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:42:40.592145    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:42:40.603429    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:42:40.603491    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:42:40.613361    4717 logs.go:276] 0 containers: []
	W0828 10:42:40.613371    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:42:40.613428    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:42:40.624016    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:42:40.624037    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:42:40.624043    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:42:40.638744    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:42:40.638755    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:42:40.650634    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:42:40.650648    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:42:40.687167    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:42:40.687174    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:42:40.706409    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:42:40.706420    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:42:40.718428    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:42:40.718439    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:42:40.730010    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:42:40.730021    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:42:40.773799    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:42:40.773811    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:42:40.791138    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:42:40.791149    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:42:40.806654    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:42:40.806666    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:42:38.279903    4578 addons.go:510] duration metric: took 30.485683709s for enable addons: enabled=[storage-provisioner]
	I0828 10:42:40.831308    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:42:40.831319    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:42:40.847452    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:42:40.847463    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:42:40.851563    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:42:40.851570    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:42:40.889020    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:42:40.889030    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:42:40.903307    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:42:40.903322    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:42:40.917924    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:42:40.917933    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:42:43.432274    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:42.909606    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:42.909657    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:48.434745    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:48.434976    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:42:48.454880    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:42:48.454976    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:42:48.473350    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:42:48.473426    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:42:48.484505    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:42:48.484584    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:42:48.494950    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:42:48.495024    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:42:48.509552    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:42:48.509619    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:42:48.520210    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:42:48.520275    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:42:48.531540    4717 logs.go:276] 0 containers: []
	W0828 10:42:48.531552    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:42:48.531609    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:42:48.542208    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:42:48.542227    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:42:48.542233    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:42:48.579108    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:42:48.579116    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:42:48.636100    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:42:48.636113    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:42:48.656463    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:42:48.656476    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:42:48.667089    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:42:48.667100    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:42:48.682430    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:42:48.682441    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:42:48.694748    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:42:48.694762    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:42:48.706782    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:42:48.706794    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:42:48.724983    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:42:48.724995    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:42:48.736666    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:42:48.736682    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:42:48.740879    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:42:48.740884    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:42:48.756180    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:42:48.756192    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:42:48.794753    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:42:48.794764    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:42:48.806442    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:42:48.806452    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:42:48.824048    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:42:48.824063    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:42:48.849416    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:42:48.849429    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:42:47.911371    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:47.911464    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:51.369440    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:52.913974    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:52.914030    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:56.371653    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:56.371858    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:42:56.404075    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:42:56.404224    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:42:56.422922    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:42:56.423012    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:42:56.437482    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:42:56.437557    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:42:56.449454    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:42:56.449527    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:42:56.460690    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:42:56.460763    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:42:56.472847    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:42:56.472917    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:42:56.483573    4717 logs.go:276] 0 containers: []
	W0828 10:42:56.483588    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:42:56.483646    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:42:56.494531    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:42:56.494547    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:42:56.494553    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:42:56.532672    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:42:56.532686    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:42:56.550143    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:42:56.550153    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:42:56.562065    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:42:56.562076    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:42:56.586323    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:42:56.586333    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:42:56.621012    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:42:56.621025    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:42:56.635637    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:42:56.635649    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:42:56.647113    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:42:56.647124    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:42:56.665080    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:42:56.665091    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:42:56.685857    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:42:56.685872    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:42:56.697245    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:42:56.697255    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:42:56.712814    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:42:56.712826    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:42:56.724608    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:42:56.724620    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:42:56.762496    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:42:56.762505    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:42:56.766808    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:42:56.766815    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:42:56.781266    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:42:56.781278    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:42:59.297478    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:57.915414    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:57.915442    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:04.299775    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:04.300124    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:04.334051    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:04.334182    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:04.351652    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:04.351737    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:04.365372    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:04.365452    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:04.377221    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:04.377284    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:04.387829    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:04.387894    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:04.398855    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:04.398923    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:04.408926    4717 logs.go:276] 0 containers: []
	W0828 10:43:04.408941    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:04.409001    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:04.419378    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:04.419394    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:04.419399    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:04.458010    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:04.458020    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:04.469911    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:04.469922    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:04.484485    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:04.484495    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:04.496761    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:04.496772    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:04.508686    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:04.508697    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:04.533942    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:04.533954    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:04.546936    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:04.546948    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:04.562036    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:04.562048    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:04.576118    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:04.576128    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:04.587605    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:04.587616    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:04.592264    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:04.592271    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:04.630818    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:04.630830    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:04.642654    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:04.642664    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:04.657571    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:04.657584    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:04.675988    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:04.676001    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
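Note: the block above is one complete diagnostic pass. Each time the apiserver healthz probe fails, minikube first enumerates the control-plane containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` (the kubelet's dockershim names containers with a k8s_ prefix), then tails 400 lines from each hit. A minimal sketch of the enumeration step, assuming plain os/exec; function and variable names here are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists Docker containers whose names carry the kubelet's
// k8s_<component> prefix, mirroring the filter used in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// docker prints one ID per line; Fields also tolerates a trailing newline
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}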
	I0828 10:43:02.917577    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:02.917663    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:07.216485    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:07.920188    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:07.920487    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:07.960381    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:43:07.960482    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:07.983797    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:43:07.983939    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:07.996184    4578 logs.go:276] 2 containers: [e251198522b1 f352e786668a]
	I0828 10:43:07.996258    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:08.007348    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:43:08.007414    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:08.018059    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:43:08.018137    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:08.031552    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:43:08.031631    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:08.044478    4578 logs.go:276] 0 containers: []
	W0828 10:43:08.044488    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:08.044551    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:08.055449    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:43:08.055464    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:08.055470    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:08.061065    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:43:08.061072    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:43:08.075321    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:43:08.075332    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:43:08.087424    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:43:08.087438    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:43:08.099293    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:08.099306    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:08.122608    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:08.122615    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:43:08.154021    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:08.154119    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
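The two "Found kubelet problem" lines above are the Node authorizer denying the kubelet's list/watch on the kube-root-ca.crt ConfigMap: a kubelet is only granted access to objects referenced by pods bound to its own node, and during this upgrade no such relationship exists yet, hence "no relationship found between node 'running-upgrade-717000' and this object". Assuming you hold impersonation rights on the cluster, the same authorization decision can be reproduced with kubectl impersonation (command shown for illustration):

kubectl auth can-i list configmaps -n kube-system \
    --as=system:node:running-upgrade-717000 --as-group=system:nodes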
	I0828 10:43:08.155402    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:43:08.155410    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:43:08.170842    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:43:08.170853    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:43:08.191659    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:43:08.191670    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:43:08.203525    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:43:08.203540    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:43:08.221425    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:43:08.221436    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:43:08.232802    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:43:08.232811    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:08.244029    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:08.244040    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:08.278669    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:08.278685    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:43:08.278712    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:43:08.278717    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:08.278721    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:08.278733    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:08.278735    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:43:12.218982    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:12.219450    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:12.256975    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:12.257109    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:12.278329    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:12.278428    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:12.293143    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:12.293223    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:12.305227    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:12.305305    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:12.316334    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:12.316395    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:12.327261    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:12.327332    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:12.337642    4717 logs.go:276] 0 containers: []
	W0828 10:43:12.337653    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:12.337707    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:12.353192    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:12.353209    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:12.353215    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:12.367468    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:12.367478    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:12.382268    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:12.382279    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:12.396772    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:12.396782    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:12.408311    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:12.408322    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:12.425624    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:12.425634    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:12.437321    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:12.437332    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:12.448691    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:12.448705    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:12.484442    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:12.484451    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:12.495588    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:12.495598    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:12.533560    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:12.533573    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:12.547260    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:12.547271    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:12.561543    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:12.561554    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:12.565735    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:12.565743    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:12.600280    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:12.600297    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:12.613789    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:12.613805    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
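Each "Checking apiserver healthz" line that follows is a GET against https://10.0.2.15:8443/healthz; the "Client.Timeout exceeded" failures and the roughly five-second gap between check and "stopped:" suggest a short client-side timeout. A minimal sketch of such a probe, assuming a 5-second timeout and skipped TLS verification for the test VM's self-signed certificate (both are assumptions, not confirmed minikube settings):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues a single probe against the apiserver healthz endpoint.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed from the ~5s gap between log entries
		Transport: &http.Transport{
			// the test cluster presents a self-signed cert, so skip verification
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("%s returned %d", url, resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}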
	I0828 10:43:15.139466    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:20.141597    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:20.141740    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:20.157266    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:20.157340    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:20.167458    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:20.167530    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:20.183289    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:20.183354    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:20.193591    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:20.193667    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:20.203997    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:20.204066    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:20.215057    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:20.215126    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:20.225416    4717 logs.go:276] 0 containers: []
	W0828 10:43:20.225426    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:20.225480    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:20.235425    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:20.235443    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:20.235448    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:20.246153    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:20.246167    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:20.257750    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:20.257759    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:20.269492    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:20.269504    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:20.305899    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:20.305907    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:20.343086    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:20.343100    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:20.357257    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:20.357268    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:20.368561    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:20.368574    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:20.372496    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:20.372504    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:20.393937    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:20.393950    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:20.405570    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:20.405581    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:20.423337    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:20.423351    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:20.435461    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:20.435472    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:20.459058    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:20.459066    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:20.495699    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:20.495708    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:20.514201    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:20.514211    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:18.281712    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:23.037415    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:23.283978    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:23.284157    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:23.304523    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:43:23.304621    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:23.319612    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:43:23.319690    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:23.332571    4578 logs.go:276] 2 containers: [e251198522b1 f352e786668a]
	I0828 10:43:23.332647    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:23.343413    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:43:23.343479    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:23.354408    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:43:23.354479    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:23.365048    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:43:23.365113    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:23.375126    4578 logs.go:276] 0 containers: []
	W0828 10:43:23.375135    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:23.375186    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:23.385807    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:43:23.385822    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:43:23.385828    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:43:23.400059    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:43:23.400072    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:43:23.413660    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:43:23.413671    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:43:23.425438    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:43:23.425448    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:43:23.440496    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:43:23.440508    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:43:23.453135    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:43:23.453148    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:43:23.469927    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:23.469936    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:43:23.501525    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:23.501625    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:23.502866    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:23.502870    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:23.507452    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:23.507460    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:23.542934    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:43:23.542947    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:43:23.554843    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:43:23.554857    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:43:23.576656    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:23.576667    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:23.601502    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:43:23.601514    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:23.612948    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:23.612959    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:43:23.612986    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:43:23.612991    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:23.612994    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:23.613023    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:23.613026    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
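The out.go lines bracketing each "X Problems detected in kubelet:" banner show minikube redirecting its error stream to fd 2 and deciding color support from the TERM/COLORTERM environment; both are empty inside the test harness, so it falls back to plain output. A rough stand-in for that capability check, purely illustrative (the real logic lives in minikube's out package, not in this form):

package main

import (
	"fmt"
	"os"
	"strings"
)

// wantsColor approximates the check implied by the log line
// "TERM=,COLORTERM=, which probably does not support color".
func wantsColor() bool {
	term := os.Getenv("TERM")
	if term == "" || term == "dumb" {
		return false // no terminal info: assume no color, as in the log
	}
	return strings.Contains(term, "color") || os.Getenv("COLORTERM") != ""
}

func main() {
	fmt.Println("color output:", wantsColor())
}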
	I0828 10:43:28.039831    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:28.040068    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:28.059913    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:28.060005    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:28.074605    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:28.074683    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:28.091285    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:28.091354    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:28.101751    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:28.101823    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:28.112797    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:28.112868    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:28.124205    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:28.124278    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:28.134876    4717 logs.go:276] 0 containers: []
	W0828 10:43:28.134888    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:28.134947    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:28.146334    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:28.146354    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:28.146360    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:28.189673    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:28.189688    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:28.204926    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:28.204937    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:28.216537    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:28.216550    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:28.227925    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:28.227935    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:28.252235    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:28.252243    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:28.288485    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:28.288494    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:28.302551    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:28.302564    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:28.313549    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:28.313561    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:28.317781    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:28.317790    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:28.329786    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:28.329797    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:28.341075    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:28.341085    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:28.355514    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:28.355523    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:28.393186    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:28.393196    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:28.407001    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:28.407011    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:28.418620    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:28.418631    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
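Once the container IDs are known, every "Gathering logs for <component> [<id>] ..." line shells out to `docker logs --tail 400 <id>` over SSH. A sketch of that step, paired with the enumeration sketch earlier; names are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// tailLogs mirrors the `docker logs --tail 400 <id>` invocations in the log.
// CombinedOutput is used because docker logs replays the container's stderr
// on the client's stderr.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	// kube-controller-manager ID taken from the log above, for illustration
	logs, err := tailLogs("c969ea54be9d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(logs)
}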
	I0828 10:43:30.938714    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:33.616829    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:35.940508    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:35.940698    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:35.967120    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:35.967216    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:35.981183    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:35.981265    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:35.997039    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:35.997110    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:36.007451    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:36.007515    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:36.018214    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:36.018271    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:36.031823    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:36.031894    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:36.042080    4717 logs.go:276] 0 containers: []
	W0828 10:43:36.042092    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:36.042150    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:36.052901    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:36.052919    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:36.052926    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:36.064709    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:36.064720    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:36.070380    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:36.070388    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:36.108506    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:36.108517    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:36.122443    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:36.122458    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:36.147724    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:36.147738    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:36.167293    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:36.167305    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:36.192112    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:36.192123    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:36.203335    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:36.203344    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:36.215398    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:36.215410    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:36.238834    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:36.238842    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:36.253007    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:36.253020    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:36.269802    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:36.269814    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:36.306069    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:36.306077    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:36.340871    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:36.340881    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:36.352879    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:36.352890    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:38.864982    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:38.619179    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:38.619525    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:38.652869    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:43:38.653001    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:38.671150    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:43:38.671244    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:38.685240    4578 logs.go:276] 2 containers: [e251198522b1 f352e786668a]
	I0828 10:43:38.685319    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:38.697013    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:43:38.697087    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:38.709153    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:43:38.709221    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:38.719219    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:43:38.719284    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:38.729998    4578 logs.go:276] 0 containers: []
	W0828 10:43:38.730010    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:38.730076    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:38.740455    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:43:38.740469    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:38.740475    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:43:38.772680    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:38.772777    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:38.774009    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:38.774015    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:38.810471    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:43:38.810484    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:43:38.824990    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:43:38.825001    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:43:38.836687    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:43:38.836697    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:43:38.853293    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:43:38.853308    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:43:38.865167    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:43:38.865178    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:38.877572    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:38.877585    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:38.882329    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:43:38.882337    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:43:38.905275    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:43:38.905288    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:43:38.916862    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:43:38.916872    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:43:38.928554    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:43:38.928564    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:43:38.946609    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:38.946619    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:38.971602    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:38.971612    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:43:38.971638    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:43:38.971643    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:38.971648    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:38.971672    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:38.971689    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:43:43.867114    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:43.867530    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:43.909269    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:43.909409    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:43.930495    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:43.930594    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:43.946688    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:43.946767    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:43.964341    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:43.964422    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:43.974972    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:43.975037    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:43.985478    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:43.985550    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:43.996290    4717 logs.go:276] 0 containers: []
	W0828 10:43:43.996302    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:43.996361    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:44.007115    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:44.007135    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:44.007141    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:44.046840    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:44.046852    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:44.063789    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:44.063799    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:44.075440    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:44.075454    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:44.092930    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:44.092941    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:44.097084    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:44.097090    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:44.135586    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:44.135597    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:44.147157    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:44.147169    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:44.171209    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:44.171221    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:44.182886    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:44.182901    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:44.194646    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:44.194658    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:44.210202    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:44.210228    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:44.223543    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:44.223557    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:44.240939    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:44.240950    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:44.258563    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:44.258577    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:44.297331    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:44.297341    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
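One detail worth noting in the recurring "container status" step: the command

sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

is a portability fallback. The backquoted subshell resolves crictl's path if it is installed (otherwise it degrades to the bare name), and if that invocation fails entirely, the `|| sudo docker ps -a` clause falls back to the Docker CLI, so the same gathering code works on both CRI- and dockershim-based nodes.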
	I0828 10:43:46.813622    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:48.975521    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:51.815864    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:51.816082    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:51.836214    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:51.836312    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:51.850780    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:51.850866    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:51.863045    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:51.863121    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:51.874369    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:51.874443    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:51.884557    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:51.884620    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:51.894808    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:51.894876    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:51.905641    4717 logs.go:276] 0 containers: []
	W0828 10:43:51.905653    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:51.905710    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:51.917143    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:51.917160    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:51.917165    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:51.931563    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:51.931574    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:51.943728    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:51.943739    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:51.967380    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:51.967391    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:51.971790    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:51.971797    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:52.009840    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:52.009851    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:52.020735    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:52.020745    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:52.037499    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:52.037513    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:52.056299    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:52.056310    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:52.074591    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:52.074602    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:52.086431    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:52.086443    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:52.122794    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:52.122802    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:52.158345    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:52.158358    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:52.172574    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:52.172584    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:52.191311    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:52.191325    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:52.202354    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:52.202364    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:54.714351    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:53.978035    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:53.978246    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:54.002389    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:43:54.002489    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:54.017807    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:43:54.017886    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:54.030069    4578 logs.go:276] 2 containers: [e251198522b1 f352e786668a]
	I0828 10:43:54.030142    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:54.046707    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:43:54.046778    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:54.061967    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:43:54.062041    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:54.080041    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:43:54.080108    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:54.090371    4578 logs.go:276] 0 containers: []
	W0828 10:43:54.090382    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:54.090442    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:54.100800    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:43:54.100816    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:54.100821    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:54.105242    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:43:54.105251    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:43:54.118903    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:43:54.118916    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:43:54.130743    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:43:54.130754    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:43:54.148889    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:43:54.148898    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:43:54.160745    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:43:54.160756    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:54.172886    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:54.172899    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:43:54.204802    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:54.204899    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:54.206133    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:54.206137    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:54.242549    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:43:54.242560    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:43:54.257085    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:43:54.257096    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:43:54.274648    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:43:54.274667    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:43:54.289675    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:43:54.289690    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:43:54.305219    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:54.305233    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:54.330393    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:54.330402    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:43:54.330429    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:43:54.330434    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:43:54.330438    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:43:54.330441    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:43:54.330444    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
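
Two test processes (PIDs 4578 and 4717) interleave their output here, and both repeat the same diagnostic loop: poll the apiserver's /healthz endpoint, and when the request times out, enumerate the control-plane containers and gather their logs. A minimal sketch of the health poll, assuming a plain Go HTTP client with a client-side deadline — the URL is taken from the log, while the 5-second timeout (inferred from the ~5s gap between each "Checking" and "stopped" line) and the skipped TLS verification are assumptions, not confirmed minikube behavior:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz mimics the poll above: one GET against the apiserver's
    // /healthz endpoint with a client-side deadline. The "Client.Timeout
    // exceeded while awaiting headers" errors in the log are this deadline
    // firing before the apiserver returns any response headers.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed; the log shows ~5s between check and "stopped"
            Transport: &http.Transport{
                // assumption: the test VM's apiserver cert is not in the trust store
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(err)
        }
    }

Every "stopped: ... context deadline exceeded" line below is this failure mode: the client-side deadline fires before the apiserver answers, which is why the loop never breaks out of log gathering.
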
	I0828 10:43:59.716557    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:59.716806    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:59.738457    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:59.738559    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:59.756097    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:59.756172    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:59.767847    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:59.767917    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:59.784678    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:59.784747    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:59.795469    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:59.795537    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:59.806006    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:59.806076    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:59.816698    4717 logs.go:276] 0 containers: []
	W0828 10:43:59.816710    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:59.816765    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:59.827602    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:59.827621    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:59.827627    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:59.861183    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:59.861194    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:59.875315    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:59.875327    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:59.886518    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:59.886527    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:59.923424    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:59.923436    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:59.962451    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:59.962469    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:59.974685    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:59.974700    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:59.989298    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:59.989312    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:00.003694    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:00.003704    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:00.015852    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:00.015864    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:00.031071    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:00.031083    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:00.048884    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:00.048894    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:00.060706    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:00.060716    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:00.084476    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:00.084496    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:00.088908    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:00.088916    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:00.102867    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:00.102878    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:02.616804    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:04.334367    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:07.618155    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:07.618270    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:07.630363    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:07.630445    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:07.640795    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:07.640858    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:07.650751    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:07.650820    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:07.661075    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:07.661146    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:07.671578    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:07.671641    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:07.682388    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:07.682459    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:07.692630    4717 logs.go:276] 0 containers: []
	W0828 10:44:07.692640    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:07.692695    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:07.703507    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:44:07.703529    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:07.703535    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:07.717521    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:07.717531    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:07.728662    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:07.728672    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:07.744571    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:07.744582    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:07.778274    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:07.778286    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:07.792948    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:07.792961    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:07.816058    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:07.816075    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:07.828370    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:07.828384    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:07.840082    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:07.840092    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:07.882474    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:07.882486    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:07.896200    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:07.896213    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:07.914997    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:07.915009    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:07.951775    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:07.951785    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:07.970988    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:07.970999    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:07.982283    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:07.982294    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:07.993753    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:07.993768    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
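
Each gathering pass starts by resolving container IDs per control-plane component, as in the `docker ps -a --filter=name=k8s_...` lines above; the `logs.go:276] N containers: [...]` lines report the parsed result. A hypothetical sketch of that step — the docker command is copied from the log, but the helper name and surrounding code are illustrative, not minikube's actual source:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same filter query seen in the log and returns the
    // matching container IDs, mirroring the "N containers: [...]" output.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }

An empty result, as with the `kindnet` filter above, yields the `No container was found matching "kindnet"` warning, which is benign when the profile does not deploy the kindnet CNI.
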
	I0828 10:44:10.500652    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:09.337610    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:09.338038    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:09.376906    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:44:09.377049    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:09.399448    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:44:09.399541    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:09.421128    4578 logs.go:276] 2 containers: [e251198522b1 f352e786668a]
	I0828 10:44:09.421202    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:09.432973    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:44:09.433044    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:09.444127    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:44:09.444198    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:09.455193    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:44:09.455260    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:09.465236    4578 logs.go:276] 0 containers: []
	W0828 10:44:09.465249    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:09.465301    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:09.476140    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:44:09.476158    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:09.476163    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:44:09.508425    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:09.508524    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:09.509818    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:09.509823    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:09.514081    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:44:09.514090    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:44:09.525708    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:44:09.525719    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:44:09.547268    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:44:09.547283    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:44:09.559477    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:44:09.559487    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:44:09.571465    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:44:09.571475    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:09.583132    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:09.583147    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:09.619970    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:44:09.619980    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:44:09.634980    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:44:09.634990    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:44:09.648936    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:44:09.648947    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:44:09.664557    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:44:09.664568    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:44:09.682295    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:09.682305    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:09.707191    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:09.707199    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:44:09.707222    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:44:09.707227    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:09.707230    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:09.707234    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:09.707281    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
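
The only kubelet problem the collector flags, here and in every later pass, is the reflector pair for the kube-root-ca.crt ConfigMap. The phrase "no relationship found between node 'running-upgrade-717000' and this object" comes from the apiserver's node authorizer, which grants a kubelet read access to a ConfigMap or Secret only when the object is referenced by a pod bound to that node; at this point in the upgrade its graph has no such edge, so the list/watch is denied. The two lines repeat verbatim below because each pass re-reads the same `journalctl -u kubelet -n 400` window.
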
	I0828 10:44:15.504839    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:15.504964    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:15.519400    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:15.519474    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:15.531001    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:15.531061    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:15.541653    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:15.541713    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:15.552259    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:15.552336    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:15.562824    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:15.562894    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:15.574192    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:15.574265    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:15.585222    4717 logs.go:276] 0 containers: []
	W0828 10:44:15.585235    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:15.585298    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:15.596336    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:44:15.596354    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:15.596360    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:15.610821    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:15.610832    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:15.622803    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:15.622814    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:15.634717    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:15.634730    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:15.639399    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:15.639407    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:15.682398    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:15.682408    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:15.694525    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:15.694535    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:15.706721    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:15.706731    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:15.745079    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:15.745089    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:15.783985    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:15.783995    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:15.798309    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:15.798319    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:15.809377    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:15.809388    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:15.821156    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:15.821166    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:15.839807    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:15.839821    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:15.864060    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:15.864078    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:15.878573    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:15.878588    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:18.396215    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:19.712901    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:23.400098    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:23.400284    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:23.415624    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:23.415704    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:23.429472    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:23.429549    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:23.440297    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:23.440361    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:23.451442    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:23.451515    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:23.462420    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:23.462489    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:23.473411    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:23.473475    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:23.484165    4717 logs.go:276] 0 containers: []
	W0828 10:44:23.484176    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:23.484233    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:23.494248    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:44:23.494268    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:23.494274    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:23.506643    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:23.506654    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:23.518693    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:23.518707    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:23.536649    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:23.536660    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:23.560766    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:23.560776    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:23.595587    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:23.595600    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:23.611320    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:23.611332    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:23.622780    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:23.622792    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:23.659848    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:23.659856    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:23.698226    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:23.698237    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:23.711758    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:23.711769    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:23.723014    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:23.723025    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:23.727390    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:23.727397    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:23.744207    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:23.744217    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:23.760014    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:23.760025    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:23.774098    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:23.774112    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:24.716195    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:24.716577    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:24.751657    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:44:24.751796    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:24.770762    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:44:24.770853    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:24.788814    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:44:24.788890    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:24.800868    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:44:24.800948    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:24.811429    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:44:24.811501    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:24.821719    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:44:24.821789    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:24.832229    4578 logs.go:276] 0 containers: []
	W0828 10:44:24.832241    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:24.832297    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:24.842592    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:44:24.842608    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:24.842612    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:24.877636    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:44:24.877650    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:44:24.889839    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:44:24.889853    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:44:24.901831    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:24.901844    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:24.926805    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:44:24.926814    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:24.939163    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:44:24.939174    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:44:24.953843    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:44:24.953856    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:44:24.968609    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:44:24.968620    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:44:24.980408    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:44:24.980419    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:44:24.992680    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:24.992691    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:24.997900    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:44:24.997906    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:44:25.009497    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:44:25.009507    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:44:25.020944    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:44:25.020952    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:44:25.040327    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:25.040337    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:44:25.074378    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:25.074477    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:25.075770    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:44:25.075775    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:44:25.097414    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:25.097426    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:44:25.097453    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:44:25.097458    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:25.097461    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:25.097465    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:25.097468    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:44:26.294175    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:31.297159    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:31.297414    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:31.327875    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:31.327963    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:31.343324    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:31.343402    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:31.360665    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:31.360744    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:31.371377    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:31.371449    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:31.381655    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:31.381719    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:31.391852    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:31.391916    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:31.402089    4717 logs.go:276] 0 containers: []
	W0828 10:44:31.402101    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:31.402157    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:31.412431    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:44:31.412449    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:31.412454    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:31.435755    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:31.435763    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:31.449657    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:31.449667    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:31.465707    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:31.465718    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:31.477131    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:31.477141    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:31.493619    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:31.493632    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:31.497590    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:31.497599    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:31.516109    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:31.516120    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:31.531115    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:31.531128    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:31.542712    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:31.542722    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:31.554370    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:31.554381    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:31.592957    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:31.592970    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:31.633573    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:31.633587    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:31.645757    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:31.645769    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:31.667161    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:31.667171    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:31.701538    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:31.701553    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
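
For orientation, each collection pass draws on the same sources: systemd unit logs via journalctl (kubelet, docker, cri-docker), kernel warnings and errors via dmesg, the last 400 lines of each component container via `docker logs --tail 400`, node state via the pinned v1.24.1 kubectl, and a container inventory via crictl with a `docker ps -a` fallback. The per-container sections vary mainly in ordering from pass to pass.
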
	I0828 10:44:34.213383    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:35.102729    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:39.215990    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:39.216353    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:39.248595    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:39.248732    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:39.267531    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:39.267621    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:39.281833    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:39.281910    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:39.293655    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:39.293729    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:39.304701    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:39.304767    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:39.315182    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:39.315246    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:39.325740    4717 logs.go:276] 0 containers: []
	W0828 10:44:39.325753    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:39.325825    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:39.338150    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:44:39.338167    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:39.338172    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:39.349598    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:39.349608    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:39.361843    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:39.361853    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:39.373292    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:39.373301    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:39.409890    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:39.409908    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:39.414143    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:39.414152    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:39.450248    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:39.450261    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:39.464439    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:39.464451    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:39.478355    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:39.478368    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:39.492907    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:39.492918    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:39.504672    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:39.504682    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:39.525378    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:39.525389    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:39.548074    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:39.548082    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:39.586359    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:39.586373    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:39.601678    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:39.601689    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:39.613535    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:39.613546    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:40.105352    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:40.105533    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:40.120420    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:44:40.120506    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:40.133425    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:44:40.133504    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:40.144126    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:44:40.144206    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:40.155316    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:44:40.155390    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:40.166176    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:44:40.166250    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:40.177090    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:44:40.177168    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:40.187303    4578 logs.go:276] 0 containers: []
	W0828 10:44:40.187315    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:40.187370    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:40.205442    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:44:40.205460    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:44:40.205466    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:44:40.219199    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:44:40.219211    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:44:40.230867    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:44:40.230878    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:44:40.243132    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:44:40.243148    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:44:40.260661    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:40.260672    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:40.264942    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:40.264948    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:40.298787    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:44:40.298798    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:44:40.318274    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:40.318286    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:40.343041    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:44:40.343050    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:40.354859    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:44:40.354873    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:44:40.369008    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:44:40.369021    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:44:40.381767    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:44:40.381780    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:44:40.397138    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:40.397149    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:44:40.428718    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:40.428815    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:40.430091    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:44:40.430096    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:44:40.450130    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:44:40.450141    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:44:40.461680    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:40.461690    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:44:40.461716    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:44:40.461723    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:40.461727    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:40.461731    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:40.461734    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:44:42.127131    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:47.129666    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:47.129826    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:47.149409    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:47.149492    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:47.160406    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:47.160479    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:47.170922    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:47.170997    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:47.182283    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:47.182350    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:47.192516    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:47.192577    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:47.203370    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:47.203444    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:47.221007    4717 logs.go:276] 0 containers: []
	W0828 10:44:47.221017    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:47.221069    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:47.237766    4717 logs.go:276] 1 containers: [207d13dc73e9]
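Before each gathering pass, minikube enumerates the control-plane containers one component at a time with docker ps -a --filter=name=k8s_<component> --format={{.ID}}. Components that were restarted during the upgrade report two IDs (one running, one exited), which is why kube-apiserver, etcd, kube-scheduler and kube-controller-manager each show "2 containers" while kube-proxy and storage-provisioner show one, and kindnet none. A sketch of that enumeration as a hypothetical helper (not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listComponent returns the IDs of all containers, running or exited,
// whose name matches kubeadm's k8s_<component> naming convention.
func listComponent(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := listComponent(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Mirrors the report's logs.go:276 "N containers: [...]" lines.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}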
	I0828 10:44:47.237790    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:47.237796    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:47.251525    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:47.251535    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:47.268647    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:47.268657    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:47.280499    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:47.280510    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:47.294771    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:47.294781    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:47.332459    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:47.332471    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:47.347255    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:47.347265    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:47.359135    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:47.359145    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:47.383463    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:47.383472    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:47.397388    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:47.397398    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:47.431611    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:47.431622    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:47.443527    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:47.443538    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:47.465412    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:47.465421    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:47.476915    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:47.476926    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:47.488670    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:47.488681    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:47.492934    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:47.492943    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
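Every gathering pass draws from the same fixed set of sources: docker logs --tail 400 <id> for each enumerated container, journalctl for the kubelet and docker/cri-docker units, dmesg for kernel-level warnings, kubectl describe nodes against the in-VM kubeconfig, and a crictl-or-docker fallback for container status; only the order varies between passes. A compact sketch of that pattern, with the source commands copied from the log and the helper assumed (a Go map's random iteration order incidentally reproduces the varying order seen above):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log source through bash, mirroring the ssh_runner lines.
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
}

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"kube-apiserver":   "docker logs --tail 400 ff5ec9bcdbc0",
	}
	for name, cmd := range sources {
		gather(name, cmd)
	}
}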
	I0828 10:44:50.031804    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:50.466071    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:55.033976    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:55.034206    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:55.054084    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:55.054175    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:55.068801    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:55.068883    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:55.080795    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:55.080859    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:55.092036    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:55.092116    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:55.102253    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:55.102321    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:55.112795    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:55.112867    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:55.122799    4717 logs.go:276] 0 containers: []
	W0828 10:44:55.122811    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:55.122867    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:55.132763    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:44:55.132778    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:55.132783    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:55.169411    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:55.169419    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:55.181233    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:55.181246    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:55.218693    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:55.218703    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:55.232151    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:55.232161    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:55.245321    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:55.245332    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:55.260283    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:55.260294    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:55.279399    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:55.279410    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:55.292387    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:55.292398    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:55.306899    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:55.306915    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:55.321040    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:55.321054    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:55.325207    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:55.325215    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:55.362770    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:55.362781    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:55.374092    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:55.374106    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:55.385601    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:55.385612    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:55.396620    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:55.396629    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:55.468397    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:55.468498    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:55.482127    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:44:55.482195    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:55.492652    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:44:55.492726    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:55.503637    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:44:55.503709    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:55.521576    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:44:55.521647    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:55.532102    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:44:55.532168    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:55.542713    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:44:55.542777    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:55.553026    4578 logs.go:276] 0 containers: []
	W0828 10:44:55.553039    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:55.553098    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:55.563862    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:44:55.563878    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:55.563883    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:44:55.596614    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:55.596716    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:55.598036    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:44:55.598044    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:44:55.612326    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:44:55.612336    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:44:55.633402    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:44:55.633412    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:44:55.644809    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:44:55.644823    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:44:55.656487    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:55.656498    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:55.661226    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:44:55.661233    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:44:55.682100    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:44:55.682111    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:44:55.693246    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:55.693259    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:55.718586    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:44:55.718595    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:55.731198    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:55.731209    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:55.766669    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:44:55.766679    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:44:55.779175    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:44:55.779186    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:44:55.797028    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:44:55.797039    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:44:55.828228    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:44:55.828238    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:44:55.846480    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:55.846490    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:44:55.846517    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:44:55.846521    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:44:55.846524    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:44:55.846527    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:44:55.846530    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:44:57.921016    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:02.923332    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:02.923579    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:02.949815    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:45:02.949944    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:02.968460    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:45:02.968537    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:02.981866    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:45:02.981933    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:02.993288    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:45:02.993363    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:03.003596    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:45:03.003660    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:03.018174    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:45:03.018249    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:03.028044    4717 logs.go:276] 0 containers: []
	W0828 10:45:03.028053    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:03.028105    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:03.040002    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:45:03.040019    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:03.040024    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:03.074824    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:45:03.074835    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:45:03.086116    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:45:03.086129    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:45:03.098016    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:45:03.098028    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:45:03.109821    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:45:03.109833    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:45:03.121566    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:03.121580    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:45:03.157696    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:03.157704    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:03.161490    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:45:03.161498    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:45:03.202929    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:45:03.202940    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:03.214422    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:45:03.214436    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:45:03.232498    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:45:03.232509    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:45:03.248506    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:45:03.248518    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:45:03.267289    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:45:03.267300    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:45:03.279125    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:45:03.279135    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:45:03.293194    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:45:03.293205    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:45:03.310123    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:03.310136    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:05.850561    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:05.833908    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:10.852874    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:10.853123    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:10.878507    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:45:10.878605    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:10.897398    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:45:10.897476    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:10.913584    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:45:10.913666    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:10.925007    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:45:10.925077    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:10.937555    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:45:10.937611    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:10.949554    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:45:10.949617    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:10.971766    4578 logs.go:276] 0 containers: []
	W0828 10:45:10.971779    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:10.971841    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:10.983349    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:45:10.983367    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:10.983372    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:10.988439    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:45:10.988450    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:45:11.003932    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:45:11.003943    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:45:11.018591    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:45:11.018602    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:45:11.036880    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:45:11.036890    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:45:11.062886    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:45:11.062900    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:45:11.076441    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:45:11.076453    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:45:11.092897    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:45:11.092912    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:45:11.105160    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:11.105173    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:11.131177    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:11.131191    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:45:11.167064    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:11.167169    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:11.168506    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:11.168516    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:11.206199    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:45:11.206210    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:45:11.223005    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:45:11.223017    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:45:11.235363    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:45:11.235375    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:45:11.247630    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:45:11.247642    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:11.260115    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:11.260127    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:45:11.260156    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:45:11.260161    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:11.260166    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:11.260170    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:11.260174    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:45:10.834926    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:10.835245    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:10.869860    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:45:10.869971    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:10.889909    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:45:10.890001    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:10.905778    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:45:10.905860    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:10.921992    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:45:10.922067    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:10.935484    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:45:10.935558    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:10.947433    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:45:10.947511    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:10.962683    4717 logs.go:276] 0 containers: []
	W0828 10:45:10.962694    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:10.962752    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:10.977195    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:45:10.977213    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:45:10.977220    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:45:10.995419    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:45:10.995431    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:45:11.012113    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:45:11.012130    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:45:11.031148    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:11.031164    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:45:11.072055    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:11.072076    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:11.113006    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:45:11.113020    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:45:11.132892    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:45:11.132901    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:45:11.147627    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:11.147638    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:11.170908    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:11.170917    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:11.175800    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:45:11.175811    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:45:11.191067    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:45:11.191084    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:45:11.231337    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:45:11.231350    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:45:11.250169    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:45:11.250182    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:45:11.263073    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:45:11.263082    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:45:11.275295    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:45:11.275306    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:45:11.289580    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:45:11.289597    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:13.805012    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:18.807664    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:18.807970    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:18.838763    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:45:18.838865    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:18.855914    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:45:18.856004    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:18.869796    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:45:18.869868    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:18.881082    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:45:18.881162    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:18.891502    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:45:18.891571    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:18.902118    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:45:18.902187    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:18.912487    4717 logs.go:276] 0 containers: []
	W0828 10:45:18.912500    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:18.912556    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:18.935839    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:45:18.935858    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:45:18.935863    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:45:18.965119    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:45:18.965134    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:45:19.008715    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:45:19.008728    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:45:19.020765    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:19.020775    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:19.042530    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:45:19.042537    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:45:19.053858    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:19.053875    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:45:19.093072    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:19.093081    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:19.130851    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:45:19.130862    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:45:19.145271    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:45:19.145281    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:45:19.156844    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:45:19.156856    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:45:19.171445    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:45:19.171455    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:45:19.183090    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:45:19.183099    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:45:19.200089    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:45:19.200100    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:19.212327    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:45:19.212339    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:45:19.227565    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:19.227576    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:19.232346    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:45:19.232353    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:45:21.262840    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:21.746945    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:26.265010    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:26.265234    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:26.285017    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:45:26.285101    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:26.299222    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:45:26.299297    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:26.311112    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:45:26.311195    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:26.326371    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:45:26.326432    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:26.337506    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:45:26.337575    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:26.348429    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:45:26.348484    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:26.358680    4578 logs.go:276] 0 containers: []
	W0828 10:45:26.358692    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:26.358745    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:26.369106    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:45:26.369127    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:45:26.369134    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:45:26.381193    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:45:26.381204    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:45:26.394677    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:45:26.394687    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:45:26.406018    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:45:26.406029    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:45:26.417215    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:26.417226    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:26.441747    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:45:26.441756    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:26.453205    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:45:26.453216    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:45:26.469075    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:45:26.469085    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:45:26.486470    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:45:26.486480    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:45:26.498594    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:45:26.498606    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:45:26.515756    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:26.515767    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:26.520582    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:26.520588    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:26.556033    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:45:26.556044    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:45:26.567833    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:45:26.567845    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:45:26.585800    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:26.585809    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:45:26.618318    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:26.618421    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:26.619755    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:26.619765    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:45:26.619795    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:45:26.619801    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:26.619805    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:26.619849    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:26.619878    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:45:26.749071    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:26.749195    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:26.768218    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:45:26.768301    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:26.779788    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:45:26.779876    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:26.790783    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:45:26.790873    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:26.801570    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:45:26.801652    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:26.812226    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:45:26.812306    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:26.823186    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:45:26.823265    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:26.834190    4717 logs.go:276] 0 containers: []
	W0828 10:45:26.834201    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:26.834256    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:26.845330    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:45:26.845346    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:26.845352    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:26.892931    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:45:26.892943    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:45:26.907552    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:45:26.907563    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:45:26.946502    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:45:26.946515    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:45:26.958152    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:45:26.958163    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:45:26.974587    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:45:26.974601    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:45:26.992262    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:26.992274    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:45:27.029480    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:27.029499    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:27.033814    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:45:27.033821    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:45:27.045276    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:45:27.045289    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:27.057705    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:45:27.057717    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:45:27.071998    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:45:27.072012    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:45:27.083770    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:45:27.083782    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:45:27.098650    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:27.098661    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:27.122234    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:45:27.122241    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:45:27.136508    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:45:27.136519    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:45:29.650547    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:34.652947    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:34.653035    4717 kubeadm.go:597] duration metric: took 4m4.439352542s to restartPrimaryControlPlane
	W0828 10:45:34.653121    4717 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 10:45:34.653161    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0828 10:45:35.686519    4717 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.033377667s)
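The duration metric at kubeadm.go:597 shows the turning point: after 4m4s of unanswered healthz probes, process 4717 abandons restartPrimaryControlPlane, warns that it will reset the cluster, and runs kubeadm reset --force over SSH with the versioned binaries prepended to PATH. A sketch of that deadline-then-fallback control flow; the concrete timeout value and the probe are simplified assumptions, only the final reset command is taken from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// probe stands in for the healthz check; it keeps failing while the
// apiserver is down, as in the report.
func probe() error { return errors.New("context deadline exceeded") }

func main() {
	start := time.Now()
	deadline := start.Add(4 * time.Minute) // the report shows ~4m4s elapsed
	for time.Now().Before(deadline) {
		if probe() == nil {
			fmt.Println("control plane recovered")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Printf("took %s to restartPrimaryControlPlane\n", time.Since(start))
	fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
	// Fall back to a full reset with the versioned kubeadm, as in the log.
	cmd := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Println("reset failed:", err, string(out))
	}
}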
	I0828 10:45:35.686908    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 10:45:35.691868    4717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 10:45:35.694712    4717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 10:45:35.697426    4717 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 10:45:35.697431    4717 kubeadm.go:157] found existing configuration files:
	
	I0828 10:45:35.697452    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0828 10:45:35.699816    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 10:45:35.699842    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 10:45:35.702376    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0828 10:45:35.704832    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 10:45:35.704849    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 10:45:35.707517    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0828 10:45:35.710668    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 10:45:35.710690    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 10:45:35.713625    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0828 10:45:35.716124    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 10:45:35.716142    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 10:45:35.719300    4717 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 10:45:35.736782    4717 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0828 10:45:35.736856    4717 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 10:45:35.784917    4717 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 10:45:35.784977    4717 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 10:45:35.785034    4717 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 10:45:35.835732    4717 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 10:45:35.844324    4717 out.go:235]   - Generating certificates and keys ...
	I0828 10:45:35.844357    4717 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 10:45:35.844382    4717 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 10:45:35.844414    4717 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 10:45:35.844442    4717 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 10:45:35.844476    4717 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 10:45:35.844508    4717 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 10:45:35.844545    4717 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 10:45:35.844579    4717 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 10:45:35.844617    4717 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 10:45:35.844657    4717 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 10:45:35.844673    4717 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 10:45:35.844700    4717 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 10:45:35.913262    4717 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 10:45:36.046244    4717 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 10:45:36.165186    4717 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 10:45:36.315761    4717 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 10:45:36.344072    4717 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 10:45:36.344646    4717 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 10:45:36.344685    4717 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 10:45:36.426389    4717 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 10:45:36.622476    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:36.430458    4717 out.go:235]   - Booting up control plane ...
	I0828 10:45:36.430509    4717 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 10:45:36.430552    4717 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 10:45:36.430591    4717 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 10:45:36.430657    4717 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 10:45:36.433764    4717 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 10:45:41.624635    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:41.624867    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:41.441408    4717 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.007390 seconds
	I0828 10:45:41.441559    4717 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 10:45:41.454485    4717 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 10:45:41.965394    4717 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 10:45:41.965750    4717 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-801000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 10:45:42.480050    4717 kubeadm.go:310] [bootstrap-token] Using token: lyjl5u.emnixh7qt156wk4r
	I0828 10:45:42.486742    4717 out.go:235]   - Configuring RBAC rules ...
	I0828 10:45:42.486880    4717 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 10:45:42.487007    4717 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 10:45:42.494246    4717 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 10:45:42.496443    4717 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 10:45:42.498498    4717 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 10:45:42.500512    4717 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 10:45:42.506611    4717 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 10:45:42.683912    4717 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 10:45:42.886466    4717 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 10:45:42.886981    4717 kubeadm.go:310] 
	I0828 10:45:42.887012    4717 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 10:45:42.887016    4717 kubeadm.go:310] 
	I0828 10:45:42.887105    4717 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 10:45:42.887109    4717 kubeadm.go:310] 
	I0828 10:45:42.887121    4717 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 10:45:42.887207    4717 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 10:45:42.887235    4717 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 10:45:42.887237    4717 kubeadm.go:310] 
	I0828 10:45:42.887270    4717 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 10:45:42.887275    4717 kubeadm.go:310] 
	I0828 10:45:42.887365    4717 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 10:45:42.887392    4717 kubeadm.go:310] 
	I0828 10:45:42.887467    4717 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 10:45:42.887506    4717 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 10:45:42.887554    4717 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 10:45:42.887558    4717 kubeadm.go:310] 
	I0828 10:45:42.887606    4717 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 10:45:42.887651    4717 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 10:45:42.887657    4717 kubeadm.go:310] 
	I0828 10:45:42.887756    4717 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lyjl5u.emnixh7qt156wk4r \
	I0828 10:45:42.887804    4717 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5b3c4c1f8e59fd4c25ce08db6b17ec7ac98ea4455ff93445c7a91221249d86a1 \
	I0828 10:45:42.887813    4717 kubeadm.go:310] 	--control-plane 
	I0828 10:45:42.887828    4717 kubeadm.go:310] 
	I0828 10:45:42.887874    4717 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 10:45:42.887877    4717 kubeadm.go:310] 
	I0828 10:45:42.887916    4717 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lyjl5u.emnixh7qt156wk4r \
	I0828 10:45:42.888005    4717 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5b3c4c1f8e59fd4c25ce08db6b17ec7ac98ea4455ff93445c7a91221249d86a1 
	I0828 10:45:42.888077    4717 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
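	The [WARNING Service-Kubelet] preflight notice above is self-describing; the fix kubeadm itself suggests, run inside the guest:
	  sudo systemctl enable kubelet.service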
	I0828 10:45:42.888085    4717 cni.go:84] Creating CNI manager for ""
	I0828 10:45:42.888095    4717 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:45:42.892734    4717 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 10:45:42.899653    4717 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 10:45:42.902491    4717 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
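	For reference, a representative bridge conflist of the kind scp'd above (a sketch only; the actual 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in this log, and the field values here are assumed defaults):
	  sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "addIf": "true",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF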
	I0828 10:45:42.907093    4717 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 10:45:42.907133    4717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 10:45:42.907215    4717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-801000 minikube.k8s.io/updated_at=2024_08_28T10_45_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=stopped-upgrade-801000 minikube.k8s.io/primary=true
	I0828 10:45:42.961622    4717 kubeadm.go:1113] duration metric: took 54.52425ms to wait for elevateKubeSystemPrivileges
	I0828 10:45:42.961666    4717 ops.go:34] apiserver oom_adj: -16
	I0828 10:45:42.961812    4717 kubeadm.go:394] duration metric: took 4m12.762148708s to StartCluster
	I0828 10:45:42.961825    4717 settings.go:142] acquiring lock: {Name:mk584f5f183a19e050e7184c0c9e70ea26430337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:45:42.961909    4717 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:45:42.962325    4717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/kubeconfig: {Name:mke8b729c65a2ae9e4d9042dc78e2127479f8609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:45:42.962545    4717 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:45:42.962551    4717 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 10:45:42.962588    4717 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-801000"
	I0828 10:45:42.962601    4717 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-801000"
	W0828 10:45:42.962607    4717 addons.go:243] addon storage-provisioner should already be in state true
	I0828 10:45:42.962618    4717 host.go:66] Checking if "stopped-upgrade-801000" exists ...
	I0828 10:45:42.962616    4717 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-801000"
	I0828 10:45:42.962634    4717 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-801000"
	I0828 10:45:42.962670    4717 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:45:42.963568    4717 kapi.go:59] client config for stopped-upgrade-801000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/client.key", CAFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106777eb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0828 10:45:42.963691    4717 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-801000"
	W0828 10:45:42.963695    4717 addons.go:243] addon default-storageclass should already be in state true
	I0828 10:45:42.963702    4717 host.go:66] Checking if "stopped-upgrade-801000" exists ...
	I0828 10:45:42.966569    4717 out.go:177] * Verifying Kubernetes components...
	I0828 10:45:42.966926    4717 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 10:45:42.970850    4717 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 10:45:42.970856    4717 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa Username:docker}
	I0828 10:45:42.974555    4717 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:45:42.978621    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:45:42.982648    4717 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 10:45:42.982654    4717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 10:45:42.982660    4717 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa Username:docker}
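	Both sshutil clients above tunnel through the forwarded guest port; the equivalent manual session (a sketch; the port, key path, and user are taken from the sshutil lines above):
	  ssh -o StrictHostKeyChecking=no -p 50471 \
	    -i /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa \
	    docker@localhost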
	I0828 10:45:43.067568    4717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 10:45:43.073131    4717 api_server.go:52] waiting for apiserver process to appear ...
	I0828 10:45:43.073179    4717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:45:43.076999    4717 api_server.go:72] duration metric: took 114.444416ms to wait for apiserver process to appear ...
	I0828 10:45:43.077008    4717 api_server.go:88] waiting for apiserver healthz status ...
	I0828 10:45:43.077015    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:43.115897    4717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 10:45:43.128043    4717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
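	To confirm the two applies above took effect, the same pinned kubectl can be queried (a sketch reusing the kubeconfig and binary paths from the commands above):
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.24.1/kubectl get storageclass,pods -n kube-system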
	I0828 10:45:43.500126    4717 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0828 10:45:43.500139    4717 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0828 10:45:41.650333    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:45:41.650448    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:41.666820    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:45:41.666908    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:41.680480    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:45:41.680551    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:41.692155    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:45:41.692221    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:41.702909    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:45:41.702976    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:41.717146    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:45:41.717220    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:41.727179    4578 logs.go:276] 0 containers: []
	W0828 10:45:41.727191    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:41.727246    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:41.738188    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:45:41.738207    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:45:41.738212    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:45:41.752640    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:45:41.752651    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:45:41.764746    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:45:41.764755    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:45:41.779521    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:45:41.779534    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:45:41.797018    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:45:41.797033    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:45:41.809569    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:45:41.809581    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:45:41.828336    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:45:41.828346    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:45:41.840818    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:45:41.840828    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:45:41.852919    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:45:41.852930    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:41.865348    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:41.865360    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:45:41.900244    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:41.900345    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:41.901631    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:41.901640    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:41.906605    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:41.906614    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:41.942272    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:45:41.942284    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:45:41.956721    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:45:41.956735    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:45:41.969308    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:41.969318    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:41.993936    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:41.993948    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:45:41.993977    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:45:41.993982    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:41.993986    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:41.993989    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:41.993992    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:45:48.078951    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:48.079042    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:53.079217    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:53.079237    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:51.997802    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:58.079467    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:58.079491    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:56.999896    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:57.000088    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:57.012672    4578 logs.go:276] 1 containers: [d751e569ea31]
	I0828 10:45:57.012744    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:57.023683    4578 logs.go:276] 1 containers: [f3ab42a808f3]
	I0828 10:45:57.023752    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:57.034718    4578 logs.go:276] 4 containers: [d2115075a059 6ddcad2204e5 e251198522b1 f352e786668a]
	I0828 10:45:57.034792    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:57.046001    4578 logs.go:276] 1 containers: [d378c1964053]
	I0828 10:45:57.046064    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:57.058321    4578 logs.go:276] 1 containers: [927c8d8912e6]
	I0828 10:45:57.058388    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:57.069367    4578 logs.go:276] 1 containers: [6b81eae0040a]
	I0828 10:45:57.069431    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:57.081160    4578 logs.go:276] 0 containers: []
	W0828 10:45:57.081173    4578 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:57.081235    4578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:57.093539    4578 logs.go:276] 1 containers: [ed2f4076ae8f]
	I0828 10:45:57.093560    4578 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:57.093567    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 10:45:57.129312    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:57.129420    4578 logs.go:138] Found kubelet problem: Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:57.130755    4578 logs.go:123] Gathering logs for kube-apiserver [d751e569ea31] ...
	I0828 10:45:57.130768    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d751e569ea31"
	I0828 10:45:57.147521    4578 logs.go:123] Gathering logs for kube-scheduler [d378c1964053] ...
	I0828 10:45:57.147543    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d378c1964053"
	I0828 10:45:57.163087    4578 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:57.163101    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:57.211321    4578 logs.go:123] Gathering logs for coredns [6ddcad2204e5] ...
	I0828 10:45:57.211331    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddcad2204e5"
	I0828 10:45:57.227206    4578 logs.go:123] Gathering logs for kube-proxy [927c8d8912e6] ...
	I0828 10:45:57.227218    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 927c8d8912e6"
	I0828 10:45:57.243642    4578 logs.go:123] Gathering logs for coredns [d2115075a059] ...
	I0828 10:45:57.243658    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2115075a059"
	I0828 10:45:57.263157    4578 logs.go:123] Gathering logs for coredns [f352e786668a] ...
	I0828 10:45:57.263169    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f352e786668a"
	I0828 10:45:57.278454    4578 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:57.278470    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:57.302928    4578 logs.go:123] Gathering logs for container status ...
	I0828 10:45:57.302946    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:57.319258    4578 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:57.319270    4578 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:57.324695    4578 logs.go:123] Gathering logs for etcd [f3ab42a808f3] ...
	I0828 10:45:57.324711    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3ab42a808f3"
	I0828 10:45:57.339922    4578 logs.go:123] Gathering logs for coredns [e251198522b1] ...
	I0828 10:45:57.339935    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e251198522b1"
	I0828 10:45:57.353006    4578 logs.go:123] Gathering logs for kube-controller-manager [6b81eae0040a] ...
	I0828 10:45:57.353021    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b81eae0040a"
	I0828 10:45:57.371047    4578 logs.go:123] Gathering logs for storage-provisioner [ed2f4076ae8f] ...
	I0828 10:45:57.371057    4578 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed2f4076ae8f"
	I0828 10:45:57.385541    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:57.385551    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0828 10:45:57.385578    4578 out.go:270] X Problems detected in kubelet:
	W0828 10:45:57.385583    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	W0828 10:45:57.385600    4578 out.go:270]   Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	I0828 10:45:57.385606    4578 out.go:358] Setting ErrFile to fd 2...
	I0828 10:45:57.385610    4578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:46:03.079732    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:03.079757    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:08.080482    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:08.080513    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:07.388606    4578 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:12.389406    4578 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:12.393600    4578 out.go:201] 
	W0828 10:46:12.397288    4578 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0828 10:46:12.397306    4578 out.go:270] * 
	W0828 10:46:12.398548    4578 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:46:12.409356    4578 out.go:201] 
	I0828 10:46:13.081132    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:13.081153    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0828 10:46:13.501487    4717 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0828 10:46:13.505735    4717 out.go:177] * Enabled addons: storage-provisioner
	I0828 10:46:13.515721    4717 addons.go:510] duration metric: took 30.554191209s for enable addons: enabled=[storage-provisioner]
	I0828 10:46:18.081282    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:18.081308    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:23.082188    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:23.082212    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
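	The loop above re-probes /healthz every few seconds until the 6m0s node wait expires; an equivalent manual probe from inside the guest (a sketch; -k is needed because the apiserver certificate is signed by minikube's own CA):
	  until curl -sk --max-time 5 https://10.0.2.15:8443/healthz; do
	    echo "apiserver not healthy yet, retrying..."
	    sleep 5
	  done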
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-08-28 17:37:19 UTC, ends at Wed 2024-08-28 17:46:28 UTC. --
	Aug 28 17:46:08 running-upgrade-717000 dockerd[2842]: time="2024-08-28T17:46:08.963808736Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/89ef349e78ff54a452defc10402430dd31050a4b1427258fbd9d8a19900ae77f pid=16318 runtime=io.containerd.runc.v2
	Aug 28 17:46:09 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:09Z" level=error msg="ContainerStats resp: {0x40009f31c0 linux}"
	Aug 28 17:46:09 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:09Z" level=error msg="ContainerStats resp: {0x4000896c80 linux}"
	Aug 28 17:46:10 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:10Z" level=error msg="ContainerStats resp: {0x400087bf40 linux}"
	Aug 28 17:46:11 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:11Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 28 17:46:11 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:11Z" level=error msg="ContainerStats resp: {0x400024b100 linux}"
	Aug 28 17:46:11 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:11Z" level=error msg="ContainerStats resp: {0x40003ee900 linux}"
	Aug 28 17:46:11 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:11Z" level=error msg="ContainerStats resp: {0x40003eef80 linux}"
	Aug 28 17:46:11 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:11Z" level=error msg="ContainerStats resp: {0x40003ef4c0 linux}"
	Aug 28 17:46:11 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:11Z" level=error msg="ContainerStats resp: {0x40003a0a80 linux}"
	Aug 28 17:46:11 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:11Z" level=error msg="ContainerStats resp: {0x40003a1340 linux}"
	Aug 28 17:46:11 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:11Z" level=error msg="ContainerStats resp: {0x4000896c00 linux}"
	Aug 28 17:46:16 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:16Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 28 17:46:21 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:21Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 28 17:46:21 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:21Z" level=error msg="ContainerStats resp: {0x400024a380 linux}"
	Aug 28 17:46:21 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:21Z" level=error msg="ContainerStats resp: {0x400024b400 linux}"
	Aug 28 17:46:22 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:22Z" level=error msg="ContainerStats resp: {0x40003ef9c0 linux}"
	Aug 28 17:46:23 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:23Z" level=error msg="ContainerStats resp: {0x40003a1f80 linux}"
	Aug 28 17:46:23 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:23Z" level=error msg="ContainerStats resp: {0x4000896900 linux}"
	Aug 28 17:46:23 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:23Z" level=error msg="ContainerStats resp: {0x4000796940 linux}"
	Aug 28 17:46:23 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:23Z" level=error msg="ContainerStats resp: {0x4000796f80 linux}"
	Aug 28 17:46:23 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:23Z" level=error msg="ContainerStats resp: {0x4000797600 linux}"
	Aug 28 17:46:23 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:23Z" level=error msg="ContainerStats resp: {0x40007977c0 linux}"
	Aug 28 17:46:23 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:23Z" level=error msg="ContainerStats resp: {0x4000797a80 linux}"
	Aug 28 17:46:26 running-upgrade-717000 cri-dockerd[2682]: time="2024-08-28T17:46:26Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	89ef349e78ff5       edaa71f2aee88       20 seconds ago      Running             coredns                   2                   93fc7fd5700fa
	ed5bb90dfb0ff       edaa71f2aee88       20 seconds ago      Running             coredns                   2                   89de6b022726f
	d2115075a059c       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   93fc7fd5700fa
	6ddcad2204e51       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   89de6b022726f
	ed2f4076ae8f9       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   60fcac9f02331
	927c8d8912e6f       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   dd7d4159846c7
	f3ab42a808f3e       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   a4480c27611ad
	d378c1964053f       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   440d1076255ca
	6b81eae0040aa       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   2b02af143fab6
	d751e569ea31b       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   1ac256bbb5a63
	
	
	==> coredns [6ddcad2204e5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4687981192988133579.1399777341405923864. HINFO: read udp 10.244.0.2:43450->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4687981192988133579.1399777341405923864. HINFO: read udp 10.244.0.2:53850->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4687981192988133579.1399777341405923864. HINFO: read udp 10.244.0.2:40654->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4687981192988133579.1399777341405923864. HINFO: read udp 10.244.0.2:51103->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4687981192988133579.1399777341405923864. HINFO: read udp 10.244.0.2:45121->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4687981192988133579.1399777341405923864. HINFO: read udp 10.244.0.2:36440->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4687981192988133579.1399777341405923864. HINFO: read udp 10.244.0.2:50015->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4687981192988133579.1399777341405923864. HINFO: read udp 10.244.0.2:57234->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4687981192988133579.1399777341405923864. HINFO: read udp 10.244.0.2:45684->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4687981192988133579.1399777341405923864. HINFO: read udp 10.244.0.2:52198->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [89ef349e78ff] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7689613527774122890.7581902846312277839. HINFO: read udp 10.244.0.3:33829->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7689613527774122890.7581902846312277839. HINFO: read udp 10.244.0.3:32958->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7689613527774122890.7581902846312277839. HINFO: read udp 10.244.0.3:51922->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7689613527774122890.7581902846312277839. HINFO: read udp 10.244.0.3:52888->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7689613527774122890.7581902846312277839. HINFO: read udp 10.244.0.3:60055->10.0.2.3:53: i/o timeout
	
	
	==> coredns [d2115075a059] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9067680223471882259.3601378962886711739. HINFO: read udp 10.244.0.3:38138->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9067680223471882259.3601378962886711739. HINFO: read udp 10.244.0.3:58724->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9067680223471882259.3601378962886711739. HINFO: read udp 10.244.0.3:50727->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9067680223471882259.3601378962886711739. HINFO: read udp 10.244.0.3:33398->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9067680223471882259.3601378962886711739. HINFO: read udp 10.244.0.3:41865->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9067680223471882259.3601378962886711739. HINFO: read udp 10.244.0.3:41726->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9067680223471882259.3601378962886711739. HINFO: read udp 10.244.0.3:35265->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9067680223471882259.3601378962886711739. HINFO: read udp 10.244.0.3:46234->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9067680223471882259.3601378962886711739. HINFO: read udp 10.244.0.3:58123->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9067680223471882259.3601378962886711739. HINFO: read udp 10.244.0.3:33702->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ed5bb90dfb0f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8783220464612238262.4047764254938013767. HINFO: read udp 10.244.0.2:41324->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8783220464612238262.4047764254938013767. HINFO: read udp 10.244.0.2:51579->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8783220464612238262.4047764254938013767. HINFO: read udp 10.244.0.2:38096->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8783220464612238262.4047764254938013767. HINFO: read udp 10.244.0.2:39266->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8783220464612238262.4047764254938013767. HINFO: read udp 10.244.0.2:36344->10.0.2.3:53: i/o timeout
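	The random-label HINFO timeouts repeated across all four coredns logs are consistent with CoreDNS's startup loop-detection probe being forwarded to the guest's upstream resolver (10.0.2.3, QEMU's built-in slirp DNS) and getting no answer; a quick upstream check from inside the guest (a sketch; nslookup availability in the guest image is assumed):
	  nslookup kubernetes.io 10.0.2.3 || echo "upstream DNS 10.0.2.3 not answering"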
	
	
	==> describe nodes <==
	Name:               running-upgrade-717000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-717000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=running-upgrade-717000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T10_42_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:42:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-717000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:46:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:42:07 +0000   Wed, 28 Aug 2024 17:42:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:42:07 +0000   Wed, 28 Aug 2024 17:42:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:42:07 +0000   Wed, 28 Aug 2024 17:42:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:42:07 +0000   Wed, 28 Aug 2024 17:42:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-717000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 9747859e0f844a88a28bf098696efa69
	  System UUID:                9747859e0f844a88a28bf098696efa69
	  Boot ID:                    44738ddf-a359-48d3-bde2-7a0fbb841cde
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-894vj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-lntll                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-717000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kube-apiserver-running-upgrade-717000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-running-upgrade-717000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-vhbjx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-717000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m7s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-717000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-717000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-717000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-717000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m9s   node-controller  Node running-upgrade-717000 event: Registered Node running-upgrade-717000 in Controller
	
	
	==> dmesg <==
	[  +1.592692] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.075615] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.077876] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +1.135649] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.090830] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.069529] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.460085] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[  +9.664172] systemd-fstab-generator[1943]: Ignoring "noauto" for root device
	[  +2.504350] systemd-fstab-generator[2204]: Ignoring "noauto" for root device
	[  +0.153595] systemd-fstab-generator[2238]: Ignoring "noauto" for root device
	[  +0.094903] systemd-fstab-generator[2252]: Ignoring "noauto" for root device
	[  +0.097778] systemd-fstab-generator[2267]: Ignoring "noauto" for root device
	[  +1.508476] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.111602] systemd-fstab-generator[2639]: Ignoring "noauto" for root device
	[  +0.077162] systemd-fstab-generator[2650]: Ignoring "noauto" for root device
	[  +0.067797] systemd-fstab-generator[2661]: Ignoring "noauto" for root device
	[  +0.086892] systemd-fstab-generator[2675]: Ignoring "noauto" for root device
	[  +2.365378] systemd-fstab-generator[2827]: Ignoring "noauto" for root device
	[  +3.567659] systemd-fstab-generator[3202]: Ignoring "noauto" for root device
	[  +1.029177] systemd-fstab-generator[3418]: Ignoring "noauto" for root device
	[Aug28 17:38] kauditd_printk_skb: 68 callbacks suppressed
	[Aug28 17:41] kauditd_printk_skb: 25 callbacks suppressed
	[Aug28 17:42] systemd-fstab-generator[10824]: Ignoring "noauto" for root device
	[  +5.646771] systemd-fstab-generator[11420]: Ignoring "noauto" for root device
	[  +0.454877] systemd-fstab-generator[11550]: Ignoring "noauto" for root device
	
	
	==> etcd [f3ab42a808f3] <==
	{"level":"info","ts":"2024-08-28T17:42:03.192Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-28T17:42:03.192Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-28T17:42:03.192Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-28T17:42:03.192Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-28T17:42:03.192Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-28T17:42:03.192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-28T17:42:03.192Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-28T17:42:03.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-28T17:42:03.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-28T17:42:03.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-28T17:42:03.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-28T17:42:03.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-28T17:42:03.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-28T17:42:03.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-28T17:42:03.737Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-717000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T17:42:03.737Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:42:03.737Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:42:03.738Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:42:03.738Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:42:03.738Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:42:03.738Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:42:03.738Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-28T17:42:03.738Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T17:42:03.738Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T17:42:03.739Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 17:46:28 up 9 min,  0 users,  load average: 0.13, 0.27, 0.16
	Linux running-upgrade-717000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d751e569ea31] <==
	I0828 17:42:04.984653       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0828 17:42:04.989284       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0828 17:42:04.989402       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0828 17:42:04.989970       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0828 17:42:04.990565       1 cache.go:39] Caches are synced for autoregister controller
	I0828 17:42:05.022175       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0828 17:42:05.029907       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0828 17:42:05.713806       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0828 17:42:05.893329       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0828 17:42:05.895942       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0828 17:42:05.896000       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0828 17:42:06.025611       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0828 17:42:06.040667       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0828 17:42:06.073843       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0828 17:42:06.075849       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0828 17:42:06.076254       1 controller.go:611] quota admission added evaluator for: endpoints
	I0828 17:42:06.077481       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0828 17:42:07.011520       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0828 17:42:07.602741       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0828 17:42:07.606005       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0828 17:42:07.614535       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0828 17:42:07.663101       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0828 17:42:19.968113       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0828 17:42:20.666588       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0828 17:42:21.223316       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [6b81eae0040a] <==
	I0828 17:42:19.861138       1 shared_informer.go:262] Caches are synced for job
	I0828 17:42:19.865503       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0828 17:42:19.865512       1 shared_informer.go:262] Caches are synced for daemon sets
	I0828 17:42:19.865525       1 shared_informer.go:262] Caches are synced for disruption
	I0828 17:42:19.865528       1 disruption.go:371] Sending events to api server.
	I0828 17:42:19.865624       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0828 17:42:19.871205       1 shared_informer.go:262] Caches are synced for namespace
	I0828 17:42:19.914965       1 shared_informer.go:262] Caches are synced for stateful set
	I0828 17:42:19.935103       1 shared_informer.go:262] Caches are synced for endpoint
	I0828 17:42:19.965534       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0828 17:42:19.965598       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0828 17:42:19.969607       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0828 17:42:20.014378       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0828 17:42:20.065097       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0828 17:42:20.065168       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0828 17:42:20.065106       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0828 17:42:20.065191       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0828 17:42:20.067365       1 shared_informer.go:262] Caches are synced for resource quota
	I0828 17:42:20.069253       1 shared_informer.go:262] Caches are synced for resource quota
	I0828 17:42:20.477234       1 shared_informer.go:262] Caches are synced for garbage collector
	I0828 17:42:20.563770       1 shared_informer.go:262] Caches are synced for garbage collector
	I0828 17:42:20.563859       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0828 17:42:20.670742       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vhbjx"
	I0828 17:42:20.817920       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-894vj"
	I0828 17:42:20.822272       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-lntll"
	
	
	==> kube-proxy [927c8d8912e6] <==
	I0828 17:42:21.197132       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0828 17:42:21.197164       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0828 17:42:21.197173       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0828 17:42:21.221292       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0828 17:42:21.221302       1 server_others.go:206] "Using iptables Proxier"
	I0828 17:42:21.221317       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0828 17:42:21.221445       1 server.go:661] "Version info" version="v1.24.1"
	I0828 17:42:21.221449       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:42:21.221686       1 config.go:317] "Starting service config controller"
	I0828 17:42:21.221692       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0828 17:42:21.221700       1 config.go:226] "Starting endpoint slice config controller"
	I0828 17:42:21.221702       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0828 17:42:21.222478       1 config.go:444] "Starting node config controller"
	I0828 17:42:21.222481       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0828 17:42:21.321983       1 shared_informer.go:262] Caches are synced for service config
	I0828 17:42:21.345366       1 shared_informer.go:262] Caches are synced for node config
	I0828 17:42:21.353362       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d378c1964053] <==
	W0828 17:42:04.949588       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 17:42:04.950983       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0828 17:42:04.949616       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 17:42:04.951028       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0828 17:42:04.949644       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 17:42:04.951061       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0828 17:42:04.949672       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0828 17:42:04.951102       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0828 17:42:04.949690       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0828 17:42:04.951133       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0828 17:42:04.949708       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 17:42:04.951163       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0828 17:42:04.949729       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 17:42:04.951206       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0828 17:42:05.772012       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0828 17:42:05.772027       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0828 17:42:05.819708       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 17:42:05.819727       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0828 17:42:05.843705       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 17:42:05.843784       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0828 17:42:05.949799       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 17:42:05.949891       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0828 17:42:05.967687       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0828 17:42:05.967712       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0828 17:42:06.137223       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-08-28 17:37:19 UTC, ends at Wed 2024-08-28 17:46:28 UTC. --
	Aug 28 17:42:09 running-upgrade-717000 kubelet[11426]: E0828 17:42:09.242887   11426 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-717000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-717000"
	Aug 28 17:42:09 running-upgrade-717000 kubelet[11426]: E0828 17:42:09.439848   11426 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-717000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-717000"
	Aug 28 17:42:09 running-upgrade-717000 kubelet[11426]: E0828 17:42:09.640267   11426 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-717000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-717000"
	Aug 28 17:42:09 running-upgrade-717000 kubelet[11426]: I0828 17:42:09.838875   11426 request.go:601] Waited for 1.125200156s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 28 17:42:09 running-upgrade-717000 kubelet[11426]: E0828 17:42:09.841465   11426 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-717000\" already exists" pod="kube-system/etcd-running-upgrade-717000"
	Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: I0828 17:42:19.823071   11426 topology_manager.go:200] "Topology Admit Handler"
	Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: W0828 17:42:19.826295   11426 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: E0828 17:42:19.826323   11426 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-717000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-717000' and this object
	Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: I0828 17:42:19.848279   11426 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: I0828 17:42:19.848631   11426 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: I0828 17:42:19.950706   11426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njj2p\" (UniqueName: \"kubernetes.io/projected/487626ec-08f2-4259-a28d-299fa7cd0973-kube-api-access-njj2p\") pod \"storage-provisioner\" (UID: \"487626ec-08f2-4259-a28d-299fa7cd0973\") " pod="kube-system/storage-provisioner"
	Aug 28 17:42:19 running-upgrade-717000 kubelet[11426]: I0828 17:42:19.950740   11426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/487626ec-08f2-4259-a28d-299fa7cd0973-tmp\") pod \"storage-provisioner\" (UID: \"487626ec-08f2-4259-a28d-299fa7cd0973\") " pod="kube-system/storage-provisioner"
	Aug 28 17:42:20 running-upgrade-717000 kubelet[11426]: I0828 17:42:20.671309   11426 topology_manager.go:200] "Topology Admit Handler"
	Aug 28 17:42:20 running-upgrade-717000 kubelet[11426]: I0828 17:42:20.820065   11426 topology_manager.go:200] "Topology Admit Handler"
	Aug 28 17:42:20 running-upgrade-717000 kubelet[11426]: I0828 17:42:20.829103   11426 topology_manager.go:200] "Topology Admit Handler"
	Aug 28 17:42:20 running-upgrade-717000 kubelet[11426]: I0828 17:42:20.858542   11426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91b4985e-10bb-47b5-a3ef-80cf48770a74-lib-modules\") pod \"kube-proxy-vhbjx\" (UID: \"91b4985e-10bb-47b5-a3ef-80cf48770a74\") " pod="kube-system/kube-proxy-vhbjx"
	Aug 28 17:42:20 running-upgrade-717000 kubelet[11426]: I0828 17:42:20.858569   11426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d94ss\" (UniqueName: \"kubernetes.io/projected/91b4985e-10bb-47b5-a3ef-80cf48770a74-kube-api-access-d94ss\") pod \"kube-proxy-vhbjx\" (UID: \"91b4985e-10bb-47b5-a3ef-80cf48770a74\") " pod="kube-system/kube-proxy-vhbjx"
	Aug 28 17:42:20 running-upgrade-717000 kubelet[11426]: I0828 17:42:20.858580   11426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91b4985e-10bb-47b5-a3ef-80cf48770a74-kube-proxy\") pod \"kube-proxy-vhbjx\" (UID: \"91b4985e-10bb-47b5-a3ef-80cf48770a74\") " pod="kube-system/kube-proxy-vhbjx"
	Aug 28 17:42:20 running-upgrade-717000 kubelet[11426]: I0828 17:42:20.858591   11426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91b4985e-10bb-47b5-a3ef-80cf48770a74-xtables-lock\") pod \"kube-proxy-vhbjx\" (UID: \"91b4985e-10bb-47b5-a3ef-80cf48770a74\") " pod="kube-system/kube-proxy-vhbjx"
	Aug 28 17:42:20 running-upgrade-717000 kubelet[11426]: I0828 17:42:20.958901   11426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmdj9\" (UniqueName: \"kubernetes.io/projected/9651377c-e91b-4255-8120-e2073bbe106e-kube-api-access-tmdj9\") pod \"coredns-6d4b75cb6d-lntll\" (UID: \"9651377c-e91b-4255-8120-e2073bbe106e\") " pod="kube-system/coredns-6d4b75cb6d-lntll"
	Aug 28 17:42:20 running-upgrade-717000 kubelet[11426]: I0828 17:42:20.959028   11426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qjnf\" (UniqueName: \"kubernetes.io/projected/0ca61ebb-a68b-42ff-aa9a-abb01fea6da3-kube-api-access-6qjnf\") pod \"coredns-6d4b75cb6d-894vj\" (UID: \"0ca61ebb-a68b-42ff-aa9a-abb01fea6da3\") " pod="kube-system/coredns-6d4b75cb6d-894vj"
	Aug 28 17:42:20 running-upgrade-717000 kubelet[11426]: I0828 17:42:20.959076   11426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ca61ebb-a68b-42ff-aa9a-abb01fea6da3-config-volume\") pod \"coredns-6d4b75cb6d-894vj\" (UID: \"0ca61ebb-a68b-42ff-aa9a-abb01fea6da3\") " pod="kube-system/coredns-6d4b75cb6d-894vj"
	Aug 28 17:42:20 running-upgrade-717000 kubelet[11426]: I0828 17:42:20.959111   11426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9651377c-e91b-4255-8120-e2073bbe106e-config-volume\") pod \"coredns-6d4b75cb6d-lntll\" (UID: \"9651377c-e91b-4255-8120-e2073bbe106e\") " pod="kube-system/coredns-6d4b75cb6d-lntll"
	Aug 28 17:46:09 running-upgrade-717000 kubelet[11426]: I0828 17:46:09.325450   11426 scope.go:110] "RemoveContainer" containerID="e251198522b108f55933fd357359712ad16867c353ca6b00301afa8fe2e110ec"
	Aug 28 17:46:09 running-upgrade-717000 kubelet[11426]: I0828 17:46:09.339141   11426 scope.go:110] "RemoveContainer" containerID="f352e786668a4f423a83a3d7bd00d725af3c29d867ee50501a0dfd539c9337a5"
	
	
	==> storage-provisioner [ed2f4076ae8f] <==
	I0828 17:42:21.472323       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 17:42:21.487319       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 17:42:21.487335       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 17:42:21.492110       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 17:42:21.492177       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-717000_be26a17f-6344-45d8-9eab-3c6aafeeff4c!
	I0828 17:42:21.492201       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f4d4312-3ca8-4d4d-abe5-309f31b5f317", APIVersion:"v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-717000_be26a17f-6344-45d8-9eab-3c6aafeeff4c became leader
	I0828 17:42:21.592320       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-717000_be26a17f-6344-45d8-9eab-3c6aafeeff4c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-717000 -n running-upgrade-717000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-717000 -n running-upgrade-717000: exit status 2 (15.682838167s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-717000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-717000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-717000
--- FAIL: TestRunningBinaryUpgrade (599.86s)
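Note on the status checks above: the --format={{.APIServer}} argument is a Go text/template that minikube renders against its status struct, and "Stopped" is the rendered field value. A minimal sketch of that mechanism, assuming a stand-in StatusFields type (minikube's real status type lives in its own codebase):

	// Sketch only: StatusFields stands in for minikube's actual status struct.
	package main

	import (
		"os"
		"text/template"
	)

	type StatusFields struct {
		Host      string
		APIServer string
	}

	func main() {
		st := StatusFields{Host: "Stopped", APIServer: "Stopped"}
		// Same template syntax the test passes via --format.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		_ = tmpl.Execute(os.Stdout, st) // prints "Stopped", as captured above
	}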

TestKubernetesUpgrade (19.3s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-149000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-149000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.194266625s)

-- stdout --
	* [kubernetes-upgrade-149000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-149000" primary control-plane node in "kubernetes-upgrade-149000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-149000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:39:45.214103    4643 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:39:45.214249    4643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:39:45.214252    4643 out.go:358] Setting ErrFile to fd 2...
	I0828 10:39:45.214255    4643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:39:45.214391    4643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:39:45.215433    4643 out.go:352] Setting JSON to false
	I0828 10:39:45.231498    4643 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4149,"bootTime":1724862636,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:39:45.231586    4643 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:39:45.237439    4643 out.go:177] * [kubernetes-upgrade-149000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:39:45.241268    4643 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:39:45.241307    4643 notify.go:220] Checking for updates...
	I0828 10:39:45.250271    4643 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:39:45.253315    4643 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:39:45.261222    4643 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:39:45.269079    4643 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:39:45.276311    4643 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:39:45.279565    4643 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:39:45.279633    4643 config.go:182] Loaded profile config "running-upgrade-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:39:45.279678    4643 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:39:45.283282    4643 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:39:45.290270    4643 start.go:297] selected driver: qemu2
	I0828 10:39:45.290276    4643 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:39:45.290282    4643 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:39:45.292483    4643 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:39:45.296293    4643 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:39:45.301435    4643 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 10:39:45.301450    4643 cni.go:84] Creating CNI manager for ""
	I0828 10:39:45.301456    4643 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0828 10:39:45.301478    4643 start.go:340] cluster config:
	{Name:kubernetes-upgrade-149000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:39:45.304795    4643 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:39:45.312283    4643 out.go:177] * Starting "kubernetes-upgrade-149000" primary control-plane node in "kubernetes-upgrade-149000" cluster
	I0828 10:39:45.315218    4643 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 10:39:45.315232    4643 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0828 10:39:45.315240    4643 cache.go:56] Caching tarball of preloaded images
	I0828 10:39:45.315321    4643 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:39:45.315327    4643 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0828 10:39:45.315391    4643 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/kubernetes-upgrade-149000/config.json ...
	I0828 10:39:45.315402    4643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/kubernetes-upgrade-149000/config.json: {Name:mkd293f4a6d83300b19f51ef319a1f807f4b549f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:39:45.315617    4643 start.go:360] acquireMachinesLock for kubernetes-upgrade-149000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:39:45.315649    4643 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "kubernetes-upgrade-149000"
	I0828 10:39:45.315659    4643 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:39:45.315685    4643 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:39:45.319404    4643 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:39:45.335156    4643 start.go:159] libmachine.API.Create for "kubernetes-upgrade-149000" (driver="qemu2")
	I0828 10:39:45.335189    4643 client.go:168] LocalClient.Create starting
	I0828 10:39:45.335257    4643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:39:45.335294    4643 main.go:141] libmachine: Decoding PEM data...
	I0828 10:39:45.335306    4643 main.go:141] libmachine: Parsing certificate...
	I0828 10:39:45.335346    4643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:39:45.335375    4643 main.go:141] libmachine: Decoding PEM data...
	I0828 10:39:45.335386    4643 main.go:141] libmachine: Parsing certificate...
	I0828 10:39:45.335717    4643 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:39:45.499381    4643 main.go:141] libmachine: Creating SSH key...
	I0828 10:39:45.565187    4643 main.go:141] libmachine: Creating Disk image...
	I0828 10:39:45.565192    4643 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:39:45.565383    4643 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0828 10:39:45.574704    4643 main.go:141] libmachine: STDOUT: 
	I0828 10:39:45.574733    4643 main.go:141] libmachine: STDERR: 
	I0828 10:39:45.574785    4643 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2 +20000M
	I0828 10:39:45.582798    4643 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:39:45.582823    4643 main.go:141] libmachine: STDERR: 
	I0828 10:39:45.582838    4643 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0828 10:39:45.582842    4643 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:39:45.582852    4643 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:39:45.582878    4643 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:6a:4a:ab:fa:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0828 10:39:45.584443    4643 main.go:141] libmachine: STDOUT: 
	I0828 10:39:45.584459    4643 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:39:45.584475    4643 client.go:171] duration metric: took 249.291333ms to LocalClient.Create
	I0828 10:39:47.586170    4643 start.go:128] duration metric: took 2.270552959s to createHost
	I0828 10:39:47.586201    4643 start.go:83] releasing machines lock for "kubernetes-upgrade-149000", held for 2.270627709s
	W0828 10:39:47.586260    4643 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:39:47.595496    4643 out.go:177] * Deleting "kubernetes-upgrade-149000" in qemu2 ...
	W0828 10:39:47.619808    4643 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:39:47.619818    4643 start.go:729] Will try again in 5 seconds ...
	I0828 10:39:52.621830    4643 start.go:360] acquireMachinesLock for kubernetes-upgrade-149000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:39:52.621991    4643 start.go:364] duration metric: took 118µs to acquireMachinesLock for "kubernetes-upgrade-149000"
	I0828 10:39:52.622028    4643 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:39:52.622075    4643 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:39:52.629284    4643 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:39:52.645047    4643 start.go:159] libmachine.API.Create for "kubernetes-upgrade-149000" (driver="qemu2")
	I0828 10:39:52.645077    4643 client.go:168] LocalClient.Create starting
	I0828 10:39:52.645145    4643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:39:52.645183    4643 main.go:141] libmachine: Decoding PEM data...
	I0828 10:39:52.645193    4643 main.go:141] libmachine: Parsing certificate...
	I0828 10:39:52.645229    4643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:39:52.645252    4643 main.go:141] libmachine: Decoding PEM data...
	I0828 10:39:52.645257    4643 main.go:141] libmachine: Parsing certificate...
	I0828 10:39:52.645564    4643 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:39:52.806367    4643 main.go:141] libmachine: Creating SSH key...
	I0828 10:39:53.319575    4643 main.go:141] libmachine: Creating Disk image...
	I0828 10:39:53.319587    4643 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:39:53.319812    4643 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0828 10:39:53.329634    4643 main.go:141] libmachine: STDOUT: 
	I0828 10:39:53.329657    4643 main.go:141] libmachine: STDERR: 
	I0828 10:39:53.329718    4643 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2 +20000M
	I0828 10:39:53.338044    4643 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:39:53.338059    4643 main.go:141] libmachine: STDERR: 
	I0828 10:39:53.338072    4643 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0828 10:39:53.338085    4643 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:39:53.338094    4643 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:39:53.338127    4643 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:bc:dc:6d:e1:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0828 10:39:53.339819    4643 main.go:141] libmachine: STDOUT: 
	I0828 10:39:53.339833    4643 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:39:53.339850    4643 client.go:171] duration metric: took 694.79475ms to LocalClient.Create
	I0828 10:39:55.340933    4643 start.go:128] duration metric: took 2.718920292s to createHost
	I0828 10:39:55.341022    4643 start.go:83] releasing machines lock for "kubernetes-upgrade-149000", held for 2.719106042s
	W0828 10:39:55.341236    4643 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-149000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-149000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:39:55.344585    4643 out.go:201] 
	W0828 10:39:55.356694    4643 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:39:55.356711    4643 out.go:270] * 
	* 
	W0828 10:39:55.358053    4643 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:39:55.370563    4643 out.go:201] 

** /stderr **
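Both create attempts above fail identically: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the QEMU process never receives its network file descriptor and host creation aborts. A direct probe of the unix socket separates "daemon not listening" from problems inside the guest; the sketch below is a diagnostic illustration, not part of the test suite:

	// Probe the unix socket that socket_vmnet_client connects to.
	// "connection refused" here means the socket_vmnet daemon is down.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}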
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-149000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-149000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-149000: (3.689108917s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-149000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-149000 status --format={{.Host}}: exit status 7 (65.657459ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
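The "(may be ok)" note reflects the harness pattern of treating a non-zero exit from the status command as data rather than a hard failure: the exit code is extracted and the test decides what it means. A minimal sketch of that pattern, with an invented runStatus helper wrapping the same binary and flags shown in the log:

	// Sketch: run the status command and surface its exit code instead of
	// failing on any non-zero exit. runStatus is a name invented here.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func runStatus() (int, error) {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "kubernetes-upgrade-149000", "status", "--format={{.Host}}")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode(), nil // e.g. 7 above, while the host is stopped
		}
		return 0, err // 0 on success; other errors (e.g. missing binary) pass through
	}

	func main() {
		code, err := runStatus()
		fmt.Println("exit code:", code, "err:", err)
	}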
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-149000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-149000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.181530833s)

-- stdout --
	* [kubernetes-upgrade-149000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-149000" primary control-plane node in "kubernetes-upgrade-149000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-149000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-149000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:39:59.168160    4680 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:39:59.168292    4680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:39:59.168296    4680 out.go:358] Setting ErrFile to fd 2...
	I0828 10:39:59.168298    4680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:39:59.168440    4680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:39:59.169591    4680 out.go:352] Setting JSON to false
	I0828 10:39:59.185900    4680 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4163,"bootTime":1724862636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:39:59.185972    4680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:39:59.191201    4680 out.go:177] * [kubernetes-upgrade-149000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:39:59.198080    4680 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:39:59.198118    4680 notify.go:220] Checking for updates...
	I0828 10:39:59.206148    4680 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:39:59.209168    4680 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:39:59.213097    4680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:39:59.216200    4680 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:39:59.219110    4680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:39:59.222321    4680 config.go:182] Loaded profile config "kubernetes-upgrade-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0828 10:39:59.222579    4680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:39:59.227118    4680 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:39:59.234098    4680 start.go:297] selected driver: qemu2
	I0828 10:39:59.234104    4680 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:39:59.234171    4680 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:39:59.236492    4680 cni.go:84] Creating CNI manager for ""
	I0828 10:39:59.236514    4680 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:39:59.236540    4680 start.go:340] cluster config:
	{Name:kubernetes-upgrade-149000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:39:59.240272    4680 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:39:59.249134    4680 out.go:177] * Starting "kubernetes-upgrade-149000" primary control-plane node in "kubernetes-upgrade-149000" cluster
	I0828 10:39:59.253132    4680 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:39:59.253146    4680 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:39:59.253156    4680 cache.go:56] Caching tarball of preloaded images
	I0828 10:39:59.253213    4680 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:39:59.253219    4680 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:39:59.253269    4680 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/kubernetes-upgrade-149000/config.json ...
	I0828 10:39:59.253769    4680 start.go:360] acquireMachinesLock for kubernetes-upgrade-149000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:39:59.253798    4680 start.go:364] duration metric: took 22.542µs to acquireMachinesLock for "kubernetes-upgrade-149000"
	I0828 10:39:59.253808    4680 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:39:59.253817    4680 fix.go:54] fixHost starting: 
	I0828 10:39:59.253938    4680 fix.go:112] recreateIfNeeded on kubernetes-upgrade-149000: state=Stopped err=<nil>
	W0828 10:39:59.253947    4680 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:39:59.262110    4680 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-149000" ...
	I0828 10:39:59.266024    4680 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:39:59.266065    4680 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:bc:dc:6d:e1:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0828 10:39:59.268231    4680 main.go:141] libmachine: STDOUT: 
	I0828 10:39:59.268253    4680 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:39:59.268281    4680 fix.go:56] duration metric: took 14.466084ms for fixHost
	I0828 10:39:59.268287    4680 start.go:83] releasing machines lock for "kubernetes-upgrade-149000", held for 14.484375ms
	W0828 10:39:59.268295    4680 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:39:59.268336    4680 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:39:59.268340    4680 start.go:729] Will try again in 5 seconds ...
	I0828 10:40:04.268638    4680 start.go:360] acquireMachinesLock for kubernetes-upgrade-149000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:40:04.269125    4680 start.go:364] duration metric: took 355.125µs to acquireMachinesLock for "kubernetes-upgrade-149000"
	I0828 10:40:04.269239    4680 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:40:04.269253    4680 fix.go:54] fixHost starting: 
	I0828 10:40:04.269781    4680 fix.go:112] recreateIfNeeded on kubernetes-upgrade-149000: state=Stopped err=<nil>
	W0828 10:40:04.269800    4680 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:40:04.274349    4680 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-149000" ...
	I0828 10:40:04.281295    4680 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:40:04.281415    4680 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:bc:dc:6d:e1:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubernetes-upgrade-149000/disk.qcow2
	I0828 10:40:04.286282    4680 main.go:141] libmachine: STDOUT: 
	I0828 10:40:04.286326    4680 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:40:04.286400    4680 fix.go:56] duration metric: took 17.122875ms for fixHost
	I0828 10:40:04.286414    4680 start.go:83] releasing machines lock for "kubernetes-upgrade-149000", held for 17.272667ms
	W0828 10:40:04.286512    4680 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:40:04.292298    4680 out.go:201] 
	W0828 10:40:04.296289    4680 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:40:04.296301    4680 out.go:270] * 
	* 
	W0828 10:40:04.297583    4680 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:40:04.308310    4680 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-149000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-149000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-149000 version --output=json: exit status 1 (52.259041ms)

** stderr ** 
	error: context "kubernetes-upgrade-149000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-28 10:40:04.372012 -0700 PDT m=+2976.300326459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-149000 -n kubernetes-upgrade-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-149000 -n kubernetes-upgrade-149000: exit status 7 (32.725417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-149000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-149000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-149000
--- FAIL: TestKubernetesUpgrade (19.30s)
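
Editor's note: every start attempt in this test failed on the host side with the same error, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, before any VM booted, so the v1.20.0 -> v1.31.0 upgrade path was never actually exercised. Below is a minimal probe sketch in Go (the suite's language) that a harness could run before the qemu2 tests to fail fast when the socket_vmnet daemon is down; the socket path is the SocketVMnetPath value from the logs above and may differ on other installs.

	// probe_socket_vmnet.go: check that the socket_vmnet daemon is accepting
	// connections on its unix socket before launching qemu2-driver tests.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path observed in the failing logs; adjust for non-default installs.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Running a probe like this on the agent would separate environment failures such as this one from genuine upgrade regressions.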

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.37s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19529
- KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current770505047/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.37s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.02s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19529
- KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2295782146/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.02s)
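
Editor's note: both TestHyperkitDriverSkipUpgrade subtests fail identically: the hyperkit driver exists only for darwin/amd64, so on this darwin/arm64 agent minikube refuses with DRV_UNSUPPORTED_OS and exit status 56, which the test then records as a failure. Below is a sketch of the kind of architecture guard that would turn these runs into skips; the helper is hypothetical, and the real driver_install_or_update_test.go may gate differently.

	package driver_test // illustrative placement only

	import (
		"runtime"
		"testing"
	)

	// skipIfNoHyperkit skips a test on hosts where the hyperkit driver
	// cannot run; hyperkit is available only on darwin/amd64.
	func skipIfNoHyperkit(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit requires darwin/amd64, running on %s/%s",
				runtime.GOOS, runtime.GOARCH)
		}
	}

With such a guard the two runs above would report SKIP rather than FAIL on arm64 agents.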

TestStoppedBinaryUpgrade/Upgrade (577.49s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2256349157 start -p stopped-upgrade-801000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2256349157 start -p stopped-upgrade-801000 --memory=2200 --vm-driver=qemu2 : (41.793785125s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2256349157 -p stopped-upgrade-801000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2256349157 -p stopped-upgrade-801000 stop: (12.117349166s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-801000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0828 10:42:13.318037    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:43:50.783268    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:44:10.221488    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-801000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m43.473388791s)

-- stdout --
	* [stopped-upgrade-801000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-801000" primary control-plane node in "stopped-upgrade-801000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-801000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0828 10:41:00.829660    4717 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:41:00.829831    4717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:41:00.829837    4717 out.go:358] Setting ErrFile to fd 2...
	I0828 10:41:00.829840    4717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:41:00.830001    4717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:41:00.831304    4717 out.go:352] Setting JSON to false
	I0828 10:41:00.850876    4717 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4224,"bootTime":1724862636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:41:00.850953    4717 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:41:00.855649    4717 out.go:177] * [stopped-upgrade-801000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:41:00.863502    4717 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:41:00.863535    4717 notify.go:220] Checking for updates...
	I0828 10:41:00.869550    4717 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:41:00.872546    4717 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:41:00.875582    4717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:41:00.878574    4717 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:41:00.881520    4717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:41:00.884880    4717 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:41:00.888517    4717 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 10:41:00.891556    4717 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:41:00.895536    4717 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:41:00.901453    4717 start.go:297] selected driver: qemu2
	I0828 10:41:00.901461    4717 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-801000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-801000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0828 10:41:00.901523    4717 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:41:00.903732    4717 cni.go:84] Creating CNI manager for ""
	I0828 10:41:00.903751    4717 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:41:00.903770    4717 start.go:340] cluster config:
	{Name:stopped-upgrade-801000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-801000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0828 10:41:00.903817    4717 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:41:00.912539    4717 out.go:177] * Starting "stopped-upgrade-801000" primary control-plane node in "stopped-upgrade-801000" cluster
	I0828 10:41:00.916521    4717 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0828 10:41:00.916538    4717 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0828 10:41:00.916545    4717 cache.go:56] Caching tarball of preloaded images
	I0828 10:41:00.916614    4717 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:41:00.916620    4717 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0828 10:41:00.916667    4717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/config.json ...
	I0828 10:41:00.917127    4717 start.go:360] acquireMachinesLock for stopped-upgrade-801000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:41:00.917160    4717 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "stopped-upgrade-801000"
	I0828 10:41:00.917170    4717 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:41:00.917176    4717 fix.go:54] fixHost starting: 
	I0828 10:41:00.917285    4717 fix.go:112] recreateIfNeeded on stopped-upgrade-801000: state=Stopped err=<nil>
	W0828 10:41:00.917293    4717 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:41:00.925513    4717 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-801000" ...
	I0828 10:41:00.929545    4717 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:41:00.929625    4717 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50471-:22,hostfwd=tcp::50472-:2376,hostname=stopped-upgrade-801000 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/disk.qcow2
	I0828 10:41:00.976133    4717 main.go:141] libmachine: STDOUT: 
	I0828 10:41:00.976173    4717 main.go:141] libmachine: STDERR: 
	I0828 10:41:00.976179    4717 main.go:141] libmachine: Waiting for VM to start (ssh -p 50471 docker@127.0.0.1)...
	I0828 10:41:21.390153    4717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/config.json ...
	I0828 10:41:21.390715    4717 machine.go:93] provisionDockerMachine start ...
	I0828 10:41:21.390840    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:21.391203    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:21.391213    4717 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 10:41:21.470185    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 10:41:21.470217    4717 buildroot.go:166] provisioning hostname "stopped-upgrade-801000"
	I0828 10:41:21.470310    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:21.470527    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:21.470540    4717 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-801000 && echo "stopped-upgrade-801000" | sudo tee /etc/hostname
	I0828 10:41:21.551887    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-801000
	
	I0828 10:41:21.551954    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:21.552111    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:21.552122    4717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-801000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-801000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-801000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 10:41:21.623916    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 10:41:21.623929    4717 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19529-1176/.minikube CaCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19529-1176/.minikube}
	I0828 10:41:21.623938    4717 buildroot.go:174] setting up certificates
	I0828 10:41:21.623944    4717 provision.go:84] configureAuth start
	I0828 10:41:21.623952    4717 provision.go:143] copyHostCerts
	I0828 10:41:21.624041    4717 exec_runner.go:144] found /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem, removing ...
	I0828 10:41:21.624049    4717 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem
	I0828 10:41:21.624178    4717 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.pem (1078 bytes)
	I0828 10:41:21.624400    4717 exec_runner.go:144] found /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem, removing ...
	I0828 10:41:21.624404    4717 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem
	I0828 10:41:21.624466    4717 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/cert.pem (1123 bytes)
	I0828 10:41:21.624600    4717 exec_runner.go:144] found /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem, removing ...
	I0828 10:41:21.624604    4717 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem
	I0828 10:41:21.624662    4717 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19529-1176/.minikube/key.pem (1679 bytes)
	I0828 10:41:21.624773    4717 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-801000 san=[127.0.0.1 localhost minikube stopped-upgrade-801000]
	I0828 10:41:21.782020    4717 provision.go:177] copyRemoteCerts
	I0828 10:41:21.782065    4717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 10:41:21.782074    4717 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa Username:docker}
	I0828 10:41:21.814285    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 10:41:21.821158    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0828 10:41:21.827874    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 10:41:21.835083    4717 provision.go:87] duration metric: took 211.142833ms to configureAuth
	I0828 10:41:21.835092    4717 buildroot.go:189] setting minikube options for container-runtime
	I0828 10:41:21.835187    4717 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:41:21.835225    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:21.835307    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:21.835312    4717 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0828 10:41:21.900773    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0828 10:41:21.900781    4717 buildroot.go:70] root file system type: tmpfs
	I0828 10:41:21.900831    4717 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0828 10:41:21.900876    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:21.900994    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:21.901028    4717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0828 10:41:21.964593    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0828 10:41:21.964650    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:21.964758    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:21.964768    4717 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0828 10:41:22.339436    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0828 10:41:22.339449    4717 machine.go:96] duration metric: took 948.758334ms to provisionDockerMachine
	I0828 10:41:22.339463    4717 start.go:293] postStartSetup for "stopped-upgrade-801000" (driver="qemu2")
	I0828 10:41:22.339470    4717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 10:41:22.339523    4717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 10:41:22.339531    4717 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa Username:docker}
	I0828 10:41:22.374510    4717 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 10:41:22.375895    4717 info.go:137] Remote host: Buildroot 2021.02.12
	I0828 10:41:22.375903    4717 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19529-1176/.minikube/addons for local assets ...
	I0828 10:41:22.375987    4717 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19529-1176/.minikube/files for local assets ...
	I0828 10:41:22.376102    4717 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0828 10:41:22.376235    4717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 10:41:22.379366    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0828 10:41:22.386532    4717 start.go:296] duration metric: took 47.065042ms for postStartSetup
	I0828 10:41:22.386547    4717 fix.go:56] duration metric: took 21.470148417s for fixHost
	I0828 10:41:22.386582    4717 main.go:141] libmachine: Using SSH client type: native
	I0828 10:41:22.386692    4717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051bc5a0] 0x1051bee00 <nil>  [] 0s} localhost 50471 <nil> <nil>}
	I0828 10:41:22.386697    4717 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 10:41:22.448559    4717 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724866882.709622421
	
	I0828 10:41:22.448568    4717 fix.go:216] guest clock: 1724866882.709622421
	I0828 10:41:22.448572    4717 fix.go:229] Guest: 2024-08-28 10:41:22.709622421 -0700 PDT Remote: 2024-08-28 10:41:22.386548 -0700 PDT m=+21.587789126 (delta=323.074421ms)
	I0828 10:41:22.448584    4717 fix.go:200] guest clock delta is within tolerance: 323.074421ms
	I0828 10:41:22.448587    4717 start.go:83] releasing machines lock for "stopped-upgrade-801000", held for 21.532199042s
	I0828 10:41:22.448659    4717 ssh_runner.go:195] Run: cat /version.json
	I0828 10:41:22.448670    4717 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa Username:docker}
	I0828 10:41:22.448659    4717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 10:41:22.448704    4717 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa Username:docker}
	W0828 10:41:22.449269    4717 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50471: connect: connection refused
	I0828 10:41:22.449289    4717 retry.go:31] will retry after 372.152083ms: dial tcp [::1]:50471: connect: connection refused
	W0828 10:41:22.480614    4717 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0828 10:41:22.480674    4717 ssh_runner.go:195] Run: systemctl --version
	I0828 10:41:22.482384    4717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 10:41:22.483922    4717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 10:41:22.483947    4717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0828 10:41:22.486999    4717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0828 10:41:22.491630    4717 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 10:41:22.491638    4717 start.go:495] detecting cgroup driver to use...
	I0828 10:41:22.491718    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 10:41:22.498352    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0828 10:41:22.501202    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0828 10:41:22.503909    4717 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0828 10:41:22.503935    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0828 10:41:22.507256    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 10:41:22.510659    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0828 10:41:22.513573    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 10:41:22.516291    4717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 10:41:22.519510    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0828 10:41:22.522742    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0828 10:41:22.525870    4717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0828 10:41:22.528836    4717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 10:41:22.531577    4717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 10:41:22.534862    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:41:22.617438    4717 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0828 10:41:22.625221    4717 start.go:495] detecting cgroup driver to use...
	I0828 10:41:22.625302    4717 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0828 10:41:22.630649    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 10:41:22.635465    4717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 10:41:22.642382    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 10:41:22.646873    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 10:41:22.651385    4717 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0828 10:41:22.709206    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 10:41:22.714133    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 10:41:22.719262    4717 ssh_runner.go:195] Run: which cri-dockerd
	I0828 10:41:22.720661    4717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0828 10:41:22.723470    4717 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0828 10:41:22.728929    4717 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0828 10:41:22.808878    4717 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0828 10:41:22.890003    4717 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0828 10:41:22.890060    4717 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0828 10:41:22.895505    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:41:22.977653    4717 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0828 10:41:24.132989    4717 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.155361584s)
	I0828 10:41:24.133063    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0828 10:41:24.137668    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 10:41:24.142112    4717 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0828 10:41:24.217403    4717 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0828 10:41:24.286660    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:41:24.362773    4717 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0828 10:41:24.368819    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 10:41:24.373187    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:41:24.449029    4717 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0828 10:41:24.487318    4717 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0828 10:41:24.487408    4717 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0828 10:41:24.489546    4717 start.go:563] Will wait 60s for crictl version
	I0828 10:41:24.489600    4717 ssh_runner.go:195] Run: which crictl
	I0828 10:41:24.491118    4717 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 10:41:24.505908    4717 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
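The 60-second wait above boils down to polling until the CRI socket file exists. A minimal Go sketch of that pattern; waitForSocket is an illustrative helper, not minikube's actual code:

	// Hedged sketch: poll for /var/run/cri-dockerd.sock until it appears or
	// the 60s budget from "Will wait 60s for socket path" is exhausted.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket exists, runtime is reachable
			}
			time.Sleep(250 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("cri-dockerd socket is ready")
	}
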
	I0828 10:41:24.505972    4717 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 10:41:24.522225    4717 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 10:41:24.543949    4717 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0828 10:41:24.544063    4717 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0828 10:41:24.545436    4717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 10:41:24.548896    4717 kubeadm.go:883] updating cluster {Name:stopped-upgrade-801000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-801000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0828 10:41:24.548937    4717 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0828 10:41:24.548973    4717 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 10:41:24.559328    4717 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0828 10:41:24.559339    4717 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0828 10:41:24.559385    4717 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0828 10:41:24.562937    4717 ssh_runner.go:195] Run: which lz4
	I0828 10:41:24.564150    4717 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 10:41:24.565446    4717 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 10:41:24.565455    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0828 10:41:25.487625    4717 docker.go:649] duration metric: took 923.534291ms to copy over tarball
	I0828 10:41:25.487679    4717 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 10:41:26.638994    4717 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.15134275s)
	I0828 10:41:26.639008    4717 ssh_runner.go:146] rm: /preloaded.tar.lz4
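The stat-then-scp sequence above is a check-then-transfer pattern: the ~360 MB preload tarball is copied only when the target is missing, then extracted and removed. A local-filesystem Go sketch of that pattern; ensureFile is a hypothetical helper standing in for the remote stat and scp:

	// Hedged sketch: skip the expensive copy when the destination already
	// exists, mirroring the "existence check for /preloaded.tar.lz4" above.
	package main

	import (
		"fmt"
		"io"
		"os"
	)

	func ensureFile(dst, src string) error {
		if _, err := os.Stat(dst); err == nil {
			return nil // already present, nothing to transfer
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		// Paths taken from the log above.
		src := "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4"
		if err := ensureFile("/preloaded.tar.lz4", src); err != nil {
			panic(err)
		}
		fmt.Println("preload tarball in place")
	}
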
	I0828 10:41:26.654798    4717 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0828 10:41:26.657957    4717 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0828 10:41:26.663043    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:41:26.750163    4717 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0828 10:41:28.332505    4717 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.582381875s)
	I0828 10:41:28.332585    4717 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 10:41:28.353088    4717 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0828 10:41:28.353098    4717 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0828 10:41:28.353104    4717 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 10:41:28.357083    4717 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:41:28.358713    4717 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0828 10:41:28.360983    4717 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:41:28.361194    4717 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0828 10:41:28.363391    4717 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0828 10:41:28.363555    4717 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:41:28.365477    4717 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:41:28.365477    4717 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0828 10:41:28.366860    4717 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:41:28.366966    4717 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:41:28.368762    4717 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:41:28.369128    4717 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:41:28.369790    4717 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:41:28.369799    4717 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0828 10:41:28.370570    4717 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:41:28.371085    4717 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	W0828 10:41:29.368069    4717 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0828 10:41:29.368199    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:41:29.379814    4717 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0828 10:41:29.379845    4717 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:41:29.379892    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0828 10:41:29.390775    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0828 10:41:29.390893    4717 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0828 10:41:29.393396    4717 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0828 10:41:29.393411    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0828 10:41:29.404635    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:41:29.416528    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0828 10:41:29.419686    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:41:29.431822    4717 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0828 10:41:29.431844    4717 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:41:29.431895    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0828 10:41:29.447523    4717 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0828 10:41:29.447538    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0828 10:41:29.450705    4717 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0828 10:41:29.450732    4717 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0828 10:41:29.450786    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0828 10:41:29.460614    4717 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0828 10:41:29.460634    4717 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:41:29.460637    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0828 10:41:29.460691    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0828 10:41:29.503624    4717 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0828 10:41:29.503671    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0828 10:41:29.503697    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0828 10:41:29.503789    4717 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0828 10:41:29.505170    4717 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0828 10:41:29.505179    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0828 10:41:29.512390    4717 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0828 10:41:29.512398    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0828 10:41:29.528649    4717 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0828 10:41:29.528750    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:41:29.545332    4717 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0828 10:41:29.545363    4717 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0828 10:41:29.545379    4717 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:41:29.545429    4717 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:41:29.558561    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0828 10:41:29.558679    4717 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0828 10:41:29.560089    4717 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0828 10:41:29.560103    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0828 10:41:29.589627    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:41:29.591044    4717 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0828 10:41:29.591053    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0828 10:41:29.591445    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0828 10:41:29.597885    4717 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0828 10:41:29.615246    4717 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0828 10:41:29.615269    4717 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:41:29.615337    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0828 10:41:29.847800    4717 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0828 10:41:29.847824    4717 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0828 10:41:29.847850    4717 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0828 10:41:29.847864    4717 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0828 10:41:29.847883    4717 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0828 10:41:29.847910    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0828 10:41:29.847910    4717 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0828 10:41:29.847950    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0828 10:41:29.861164    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0828 10:41:29.861166    4717 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0828 10:41:29.861222    4717 cache_images.go:92] duration metric: took 1.508166292s to LoadCachedImages
	W0828 10:41:29.861265    4717 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
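Each cached image above is streamed into the daemon with `sudo cat <tarball> | docker load`. A Go sketch of the equivalent pipe; loadImage is illustrative, not minikube's helper:

	// Hedged sketch: feed a cached image tarball to `docker load` over stdin,
	// as the repeated "Loading image: /var/lib/minikube/images/..." steps do.
	package main

	import (
		"os"
		"os/exec"
	)

	func loadImage(tarball string) error {
		f, err := os.Open(tarball)
		if err != nil {
			return err
		}
		defer f.Close()
		cmd := exec.Command("docker", "load")
		cmd.Stdin = f
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		// Path taken from the log above.
		if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
			panic(err)
		}
	}
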
	I0828 10:41:29.861270    4717 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0828 10:41:29.861319    4717 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-801000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-801000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 10:41:29.861376    4717 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0828 10:41:29.874642    4717 cni.go:84] Creating CNI manager for ""
	I0828 10:41:29.874654    4717 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:41:29.874660    4717 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 10:41:29.874668    4717 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-801000 NodeName:stopped-upgrade-801000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 10:41:29.874736    4717 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-801000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 10:41:29.874787    4717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0828 10:41:29.878332    4717 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 10:41:29.878360    4717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 10:41:29.881218    4717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0828 10:41:29.885994    4717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 10:41:29.890802    4717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0828 10:41:29.896009    4717 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0828 10:41:29.897245    4717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 10:41:29.900578    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:41:29.967952    4717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 10:41:29.973453    4717 certs.go:68] Setting up /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000 for IP: 10.0.2.15
	I0828 10:41:29.973461    4717 certs.go:194] generating shared ca certs ...
	I0828 10:41:29.973470    4717 certs.go:226] acquiring lock for ca certs: {Name:mkf861e7f19b199967d33246b8c25f60e0670f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:41:29.973639    4717 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.key
	I0828 10:41:29.973688    4717 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.key
	I0828 10:41:29.973694    4717 certs.go:256] generating profile certs ...
	I0828 10:41:29.973767    4717 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/client.key
	I0828 10:41:29.973784    4717 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.key.d629ac91
	I0828 10:41:29.973799    4717 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.crt.d629ac91 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0828 10:41:30.071317    4717 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.crt.d629ac91 ...
	I0828 10:41:30.071335    4717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.crt.d629ac91: {Name:mk5decf942ff473ed05904e6bec266e199df58a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:41:30.071892    4717 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.key.d629ac91 ...
	I0828 10:41:30.071902    4717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.key.d629ac91: {Name:mk61461cb5d4384e962aa64d28f518bdcf88010d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:41:30.072048    4717 certs.go:381] copying /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.crt.d629ac91 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.crt
	I0828 10:41:30.072200    4717 certs.go:385] copying /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.key.d629ac91 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.key
	I0828 10:41:30.072365    4717 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/proxy-client.key
	I0828 10:41:30.072506    4717 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/1678.pem (1338 bytes)
	W0828 10:41:30.072534    4717 certs.go:480] ignoring /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0828 10:41:30.072539    4717 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 10:41:30.072565    4717 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem (1078 bytes)
	I0828 10:41:30.072592    4717 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem (1123 bytes)
	I0828 10:41:30.072616    4717 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/key.pem (1679 bytes)
	I0828 10:41:30.072668    4717 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0828 10:41:30.073042    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 10:41:30.079902    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 10:41:30.087325    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 10:41:30.095024    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 10:41:30.102367    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0828 10:41:30.109532    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 10:41:30.116412    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 10:41:30.123427    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 10:41:30.130780    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0828 10:41:30.137902    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0828 10:41:30.144473    4717 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 10:41:30.151263    4717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 10:41:30.156341    4717 ssh_runner.go:195] Run: openssl version
	I0828 10:41:30.158336    4717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 10:41:30.161247    4717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 10:41:30.162754    4717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:51 /usr/share/ca-certificates/minikubeCA.pem
	I0828 10:41:30.162778    4717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 10:41:30.164540    4717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 10:41:30.167522    4717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0828 10:41:30.170810    4717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0828 10:41:30.172244    4717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:06 /usr/share/ca-certificates/1678.pem
	I0828 10:41:30.172266    4717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0828 10:41:30.173950    4717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0828 10:41:30.176704    4717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0828 10:41:30.179692    4717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0828 10:41:30.181170    4717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:06 /usr/share/ca-certificates/16782.pem
	I0828 10:41:30.181195    4717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0828 10:41:30.182908    4717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 10:41:30.186151    4717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 10:41:30.187501    4717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 10:41:30.189345    4717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 10:41:30.191084    4717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 10:41:30.192900    4717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 10:41:30.194644    4717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 10:41:30.196492    4717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
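The `-checkend 86400` probes above ask openssl whether each certificate expires within the next 24 hours (exit status 1 if so). A Go sketch of the same check using crypto/x509:

	// Hedged sketch: the Go analogue of `openssl x509 -noout -in CERT
	// -checkend 86400`. Usage: go run checkend.go /path/to/cert.crt
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 86400s")
			os.Exit(1) // same exit convention as openssl -checkend
		}
		fmt.Println("certificate is valid for at least another 24h")
	}
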
	I0828 10:41:30.198213    4717 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-801000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-801000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0828 10:41:30.198289    4717 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0828 10:41:30.208756    4717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 10:41:30.211948    4717 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 10:41:30.211955    4717 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 10:41:30.211995    4717 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 10:41:30.215910    4717 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 10:41:30.216229    4717 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-801000" does not appear in /Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:41:30.216329    4717 kubeconfig.go:62] /Users/jenkins/minikube-integration/19529-1176/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-801000" cluster setting kubeconfig missing "stopped-upgrade-801000" context setting]
	I0828 10:41:30.216526    4717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/kubeconfig: {Name:mke8b729c65a2ae9e4d9042dc78e2127479f8609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:41:30.216990    4717 kapi.go:59] client config for stopped-upgrade-801000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/client.key", CAFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106777eb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0828 10:41:30.217312    4717 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 10:41:30.220077    4717 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-801000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
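The drift check above relies on diff's exit codes: 0 means the files are identical, 1 means they differ (so the cluster is reconfigured from the new kubeadm.yaml), and anything greater means diff itself failed. A Go sketch of that decision, using the paths from the log:

	// Hedged sketch: run `diff -u old new` and treat exit status 1 as
	// "config drift detected", mirroring kubeadm.go:640 above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.Output() // stdout still holds the diff on exit 1
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
			fmt.Printf("config drift detected, will reconfigure:\n%s", out)
			return
		}
		if err != nil {
			panic(err) // exit status >1 means diff itself failed
		}
		fmt.Println("kubeadm config unchanged")
	}
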
	I0828 10:41:30.220083    4717 kubeadm.go:1160] stopping kube-system containers ...
	I0828 10:41:30.220123    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0828 10:41:30.233447    4717 docker.go:483] Stopping containers: [57615586b5d3 3527382822a3 37d0386da62f f04951a7c514 d8ab8c596fcc 747a7191149c caabf38006b1 657511b584fb]
	I0828 10:41:30.233513    4717 ssh_runner.go:195] Run: docker stop 57615586b5d3 3527382822a3 37d0386da62f f04951a7c514 d8ab8c596fcc 747a7191149c caabf38006b1 657511b584fb
	I0828 10:41:30.243821    4717 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 10:41:30.249654    4717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 10:41:30.252386    4717 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 10:41:30.252392    4717 kubeadm.go:157] found existing configuration files:
	
	I0828 10:41:30.252415    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0828 10:41:30.255241    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 10:41:30.255270    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 10:41:30.258239    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0828 10:41:30.260764    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 10:41:30.260785    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 10:41:30.263573    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0828 10:41:30.266834    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 10:41:30.266859    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 10:41:30.269851    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0828 10:41:30.272184    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 10:41:30.272207    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 10:41:30.275155    4717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 10:41:30.278069    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:41:30.300117    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:41:30.829241    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:41:30.960304    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:41:30.994260    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 10:41:31.013773    4717 api_server.go:52] waiting for apiserver process to appear ...
	I0828 10:41:31.013865    4717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:41:31.515903    4717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:41:32.015884    4717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:41:32.515871    4717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:41:32.520124    4717 api_server.go:72] duration metric: took 1.506407875s to wait for apiserver process to appear ...
	I0828 10:41:32.520135    4717 api_server.go:88] waiting for apiserver healthz status ...
	I0828 10:41:32.520145    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:37.522049    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:37.522140    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:42.522264    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:42.522337    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:47.522967    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:47.522990    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:52.523354    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:52.523414    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:41:57.523894    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:41:57.523934    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:02.524596    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:02.524620    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:07.526253    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:07.526282    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:12.527808    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:12.527877    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:17.530276    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:17.530335    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:22.532511    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:22.532560    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:27.534736    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:27.534786    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:32.536999    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
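Each healthz probe above gives up after about five seconds (note the 5 s gaps between attempts) before minikube falls back to gathering component logs. A bounded Go sketch of such a poll loop; the InsecureSkipVerify shortcut is illustrative only, since the real client trusts the cluster CA shown in the kapi.go line earlier:

	// Hedged sketch: probe the apiserver healthz endpoint with a short
	// per-request timeout, as the repeated api_server.go:253 checks do.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s "Client.Timeout exceeded" cadence
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		for attempt := 0; attempt < 12; attempt++ {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err) // same shape as the log's timeout lines
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		fmt.Println("apiserver never became healthy; would gather logs next")
	}
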
	I0828 10:42:32.537261    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:42:32.563346    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:42:32.563442    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:42:32.578340    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:42:32.578417    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:42:32.590643    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:42:32.590716    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:42:32.602699    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:42:32.602767    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:42:32.617464    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:42:32.617548    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:42:32.628036    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:42:32.628103    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:42:32.639199    4717 logs.go:276] 0 containers: []
	W0828 10:42:32.639210    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:42:32.639271    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:42:32.649841    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:42:32.649869    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:42:32.649874    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:42:32.688198    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:42:32.688209    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:42:32.771459    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:42:32.771474    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:42:32.783856    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:42:32.783876    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:42:32.798243    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:42:32.798257    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:42:32.809085    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:42:32.809095    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:42:32.820457    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:42:32.820469    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:42:32.862971    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:42:32.862982    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:42:32.874412    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:42:32.874423    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:42:32.889594    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:42:32.889605    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:42:32.906729    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:42:32.906740    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:42:32.918649    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:42:32.918665    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:42:32.931166    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:42:32.931178    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:42:32.935810    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:42:32.935819    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:42:32.950037    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:42:32.950047    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:42:32.967206    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:42:32.967217    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:42:35.494966    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:40.497321    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:40.497663    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:42:40.529740    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:42:40.529867    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:42:40.552364    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:42:40.552449    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:42:40.566771    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:42:40.566859    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:42:40.578205    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:42:40.578284    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:42:40.592075    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:42:40.592145    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:42:40.603429    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:42:40.603491    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:42:40.613361    4717 logs.go:276] 0 containers: []
	W0828 10:42:40.613371    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:42:40.613428    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:42:40.624016    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:42:40.624037    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:42:40.624043    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:42:40.638744    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:42:40.638755    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:42:40.650634    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:42:40.650648    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:42:40.687167    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:42:40.687174    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:42:40.706409    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:42:40.706420    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:42:40.718428    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:42:40.718439    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:42:40.730010    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:42:40.730021    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:42:40.773799    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:42:40.773811    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:42:40.791138    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:42:40.791149    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:42:40.806654    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:42:40.806666    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:42:40.831308    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:42:40.831319    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:42:40.847452    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:42:40.847463    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:42:40.851563    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:42:40.851570    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:42:40.889020    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:42:40.889030    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:42:40.903307    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:42:40.903322    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:42:40.917924    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:42:40.917933    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
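	The block above is one full diagnostic cycle: minikube polls the apiserver healthz endpoint (api_server.go:253), gives up after roughly five seconds, then re-enumerates every control-plane container and tails its logs before retrying. The probe itself can be reproduced from inside the guest; a minimal sketch, assuming curl is available on the node (the endpoint is taken from the log, and the five-second budget via --max-time is an assumption matching the observed check-to-timeout gap, not a flag read from minikube):

	    # Same endpoint minikube polls; -k skips verification of the node's
	    # self-signed serving cert. The 5s cap mirrors the gap between each
	    # "Checking apiserver healthz" line and its "stopped:" timeout above.
	    curl -sk --max-time 5 https://10.0.2.15:8443/healthz && echo ok || echo 'apiserver not ready'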
	I0828 10:42:43.432274    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:48.434745    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:48.434976    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:42:48.454880    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:42:48.454976    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:42:48.473350    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:42:48.473426    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:42:48.484505    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:42:48.484584    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:42:48.494950    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:42:48.495024    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:42:48.509552    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:42:48.509619    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:42:48.520210    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:42:48.520275    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:42:48.531540    4717 logs.go:276] 0 containers: []
	W0828 10:42:48.531552    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:42:48.531609    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:42:48.542208    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:42:48.542227    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:42:48.542233    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:42:48.579108    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:42:48.579116    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:42:48.636100    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:42:48.636113    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:42:48.656463    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:42:48.656476    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:42:48.667089    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:42:48.667100    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:42:48.682430    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:42:48.682441    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:42:48.694748    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:42:48.694762    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:42:48.706782    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:42:48.706794    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:42:48.724983    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:42:48.724995    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:42:48.736666    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:42:48.736682    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:42:48.740879    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:42:48.740884    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:42:48.756180    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:42:48.756192    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:42:48.794753    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:42:48.794764    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:42:48.806442    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:42:48.806452    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:42:48.824048    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:42:48.824063    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:42:48.849416    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:42:48.849429    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:42:51.369440    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:42:56.371653    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:42:56.371858    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:42:56.404075    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:42:56.404224    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:42:56.422922    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:42:56.423012    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:42:56.437482    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:42:56.437557    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:42:56.449454    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:42:56.449527    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:42:56.460690    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:42:56.460763    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:42:56.472847    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:42:56.472917    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:42:56.483573    4717 logs.go:276] 0 containers: []
	W0828 10:42:56.483588    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:42:56.483646    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:42:56.494531    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:42:56.494547    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:42:56.494553    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:42:56.532672    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:42:56.532686    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:42:56.550143    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:42:56.550153    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:42:56.562065    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:42:56.562076    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:42:56.586323    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:42:56.586333    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:42:56.621012    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:42:56.621025    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:42:56.635637    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:42:56.635649    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:42:56.647113    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:42:56.647124    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:42:56.665080    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:42:56.665091    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:42:56.685857    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:42:56.685872    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:42:56.697245    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:42:56.697255    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:42:56.712814    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:42:56.712826    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:42:56.724608    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:42:56.724620    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:42:56.762496    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:42:56.762505    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:42:56.766808    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:42:56.766815    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:42:56.781266    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:42:56.781278    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
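	Throughout these cycles kube-apiserver, etcd, kube-scheduler, and kube-controller-manager each enumerate two container IDs, likely a previous instance plus its restart since "docker ps -a" includes exited containers, while coredns, kube-proxy, and storage-provisioner show one and kindnet none, which is why the "No container was found matching \"kindnet\"" warning recurs. The enumeration is a plain filtered listing; to check one component by hand, a sketch using the same filter shown in the log for the apiserver:

	    # List all containers, running or exited, whose name matches the
	    # kube-apiserver pod prefix, printing only their IDs.
	    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}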
	I0828 10:42:59.297478    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:04.299775    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:04.300124    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:04.334051    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:04.334182    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:04.351652    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:04.351737    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:04.365372    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:04.365452    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:04.377221    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:04.377284    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:04.387829    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:04.387894    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:04.398855    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:04.398923    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:04.408926    4717 logs.go:276] 0 containers: []
	W0828 10:43:04.408941    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:04.409001    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:04.419378    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:04.419394    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:04.419399    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:04.458010    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:04.458020    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:04.469911    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:04.469922    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:04.484485    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:04.484495    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:04.496761    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:04.496772    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:04.508686    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:04.508697    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:04.533942    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:04.533954    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:04.546936    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:04.546948    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:04.562036    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:04.562048    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:04.576118    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:04.576128    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:04.587605    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:04.587616    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:04.592264    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:04.592271    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:04.630818    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:04.630830    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:04.642654    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:04.642664    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:04.657571    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:04.657584    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:04.675988    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:04.676001    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:07.216485    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:12.218982    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:12.219450    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:12.256975    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:12.257109    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:12.278329    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:12.278428    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:12.293143    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:12.293223    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:12.305227    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:12.305305    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:12.316334    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:12.316395    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:12.327261    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:12.327332    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:12.337642    4717 logs.go:276] 0 containers: []
	W0828 10:43:12.337653    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:12.337707    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:12.353192    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:12.353209    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:12.353215    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:12.367468    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:12.367478    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:12.382268    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:12.382279    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:12.396772    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:12.396782    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:12.408311    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:12.408322    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:12.425624    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:12.425634    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:12.437321    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:12.437332    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:12.448691    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:12.448705    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:12.484442    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:12.484451    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:12.495588    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:12.495598    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:12.533560    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:12.533573    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:12.547260    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:12.547271    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:12.561543    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:12.561554    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:12.565735    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:12.565743    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:12.600280    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:12.600297    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:12.613789    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:12.613805    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:15.139466    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:20.141597    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:20.141740    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:20.157266    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:20.157340    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:20.167458    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:20.167530    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:20.183289    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:20.183354    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:20.193591    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:20.193667    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:20.203997    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:20.204066    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:20.215057    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:20.215126    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:20.225416    4717 logs.go:276] 0 containers: []
	W0828 10:43:20.225426    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:20.225480    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:20.235425    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:20.235443    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:20.235448    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:20.246153    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:20.246167    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:20.257750    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:20.257759    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:20.269492    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:20.269504    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:20.305899    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:20.305907    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:20.343086    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:20.343100    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:20.357257    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:20.357268    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:20.368561    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:20.368574    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:20.372496    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:20.372504    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:20.393937    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:20.393950    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:20.405570    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:20.405581    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:20.423337    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:20.423351    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:20.435461    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:20.435472    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:20.459058    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:20.459066    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:20.495699    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:20.495708    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:20.514201    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:20.514211    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:23.037415    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:28.039831    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:28.040068    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:28.059913    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:28.060005    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:28.074605    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:28.074683    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:28.091285    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:28.091354    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:28.101751    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:28.101823    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:28.112797    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:28.112868    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:28.124205    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:28.124278    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:28.134876    4717 logs.go:276] 0 containers: []
	W0828 10:43:28.134888    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:28.134947    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:28.146334    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:28.146354    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:28.146360    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:28.189673    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:28.189688    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:28.204926    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:28.204937    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:28.216537    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:28.216550    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:28.227925    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:28.227935    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:28.252235    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:28.252243    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:28.288485    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:28.288494    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:28.302551    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:28.302564    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:28.313549    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:28.313561    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:28.317781    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:28.317790    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:28.329786    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:28.329797    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:28.341075    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:28.341085    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:28.355514    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:28.355523    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:28.393186    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:28.393196    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:28.407001    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:28.407011    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:28.418620    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:28.418631    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:30.938714    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:35.940508    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:35.940698    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:35.967120    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:35.967216    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:35.981183    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:35.981265    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:35.997039    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:35.997110    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:36.007451    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:36.007515    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:36.018214    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:36.018271    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:36.031823    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:36.031894    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:36.042080    4717 logs.go:276] 0 containers: []
	W0828 10:43:36.042092    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:36.042150    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:36.052901    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:36.052919    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:36.052926    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:36.064709    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:36.064720    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:36.070380    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:36.070388    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:36.108506    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:36.108517    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:36.122443    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:36.122458    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:36.147724    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:36.147738    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:36.167293    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:36.167305    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:36.192112    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:36.192123    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:36.203335    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:36.203344    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:36.215398    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:36.215410    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:36.238834    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:36.238842    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:36.253007    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:36.253020    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:36.269802    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:36.269814    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:36.306069    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:36.306077    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:36.340871    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:36.340881    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:36.352879    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:36.352890    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:38.864982    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:43.867114    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:43.867530    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:43.909269    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:43.909409    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:43.930495    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:43.930594    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:43.946688    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:43.946767    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:43.964341    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:43.964422    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:43.974972    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:43.975037    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:43.985478    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:43.985550    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:43.996290    4717 logs.go:276] 0 containers: []
	W0828 10:43:43.996302    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:43.996361    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:44.007115    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:44.007135    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:44.007141    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:44.046840    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:44.046852    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:44.063789    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:44.063799    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:44.075440    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:44.075454    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:44.092930    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:44.092941    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:44.097084    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:44.097090    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:44.135586    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:44.135597    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:44.147157    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:44.147169    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:44.171209    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:44.171221    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:44.182886    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:44.182901    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:44.194646    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:44.194658    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:44.210202    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:44.210228    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:44.223543    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:44.223557    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:44.240939    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:44.240950    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:44.258563    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:44.258577    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:44.297331    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:44.297341    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:46.813622    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:51.815864    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:51.816082    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:51.836214    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:51.836312    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:51.850780    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:51.850866    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:51.863045    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:51.863121    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:51.874369    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:51.874443    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:51.884557    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:51.884620    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:51.894808    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:51.894876    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:51.905641    4717 logs.go:276] 0 containers: []
	W0828 10:43:51.905653    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:51.905710    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:51.917143    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:51.917160    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:51.917165    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:43:51.931563    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:43:51.931574    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:43:51.943728    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:43:51.943739    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:43:51.967380    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:43:51.967391    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:43:51.971790    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:51.971797    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:52.009840    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:52.009851    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:52.020735    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:43:52.020745    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:43:52.037499    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:43:52.037513    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:43:52.056299    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:43:52.056310    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:43:52.074591    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:52.074602    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:52.086431    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:52.086443    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:52.122794    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:52.122802    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:52.158345    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:52.158358    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:52.172574    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:43:52.172584    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:43:52.191311    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:43:52.191325    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:43:52.202354    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:52.202364    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:54.714351    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:43:59.716557    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:43:59.716806    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:43:59.738457    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:43:59.738559    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:43:59.756097    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:43:59.756172    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:43:59.767847    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:43:59.767917    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:43:59.784678    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:43:59.784747    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:43:59.795469    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:43:59.795537    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:43:59.806006    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:43:59.806076    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:43:59.816698    4717 logs.go:276] 0 containers: []
	W0828 10:43:59.816710    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:43:59.816765    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:43:59.827602    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:43:59.827621    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:43:59.827627    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:43:59.861183    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:43:59.861194    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:43:59.875315    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:43:59.875327    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:43:59.886518    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:43:59.886527    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:43:59.923424    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:43:59.923436    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:43:59.962451    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:43:59.962469    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:43:59.974685    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:43:59.974700    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:43:59.989298    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:43:59.989312    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:00.003694    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:00.003704    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:00.015852    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:00.015864    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:00.031071    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:00.031083    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:00.048884    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:00.048894    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:00.060706    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:00.060716    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:00.084476    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:00.084496    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:00.088908    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:00.088916    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:00.102867    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:00.102878    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
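	Every cycle gathers the same sources in a shuffled order. To reproduce the full capture by hand inside the guest (e.g. over "minikube ssh"), a sketch that bundles the commands taken verbatim from the ssh_runner lines above; only the grouping into one script and the explicit ID list are additions:

	    #!/bin/bash
	    # Node-level logs, same units and line counts minikube requests.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # Cluster view via the kubeconfig minikube provisioned on the node.
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    # Container inventory, preferring crictl when present.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    # Last 400 lines from each control-plane container the log found.
	    for id in ff5ec9bcdbc0 f04951a7c514 cadaebeab74a 57615586b5d3 \
	              b8c085ebafff 5c6a8a7a0f54 37d0386da62f 3e1f28aa6731 \
	              c969ea54be9d d8ab8c596fcc 207d13dc73e9; do
	      docker logs --tail 400 "$id"
	    done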
	I0828 10:44:02.616804    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:07.618155    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:07.618270    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:07.630363    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:07.630445    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:07.640795    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:07.640858    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:07.650751    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:07.650820    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:07.661075    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:07.661146    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:07.671578    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:07.671641    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:07.682388    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:07.682459    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:07.692630    4717 logs.go:276] 0 containers: []
	W0828 10:44:07.692640    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:07.692695    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:07.703507    4717 logs.go:276] 1 containers: [207d13dc73e9]
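	Each sweep begins by enumerating per-component container IDs via the k8s_ name prefix that cri-dockerd gives Kubernetes-managed containers, using the exact filter pattern recorded above:

	    # Swap the component name (etcd, coredns, kube-scheduler, ...) as needed.
	    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}

	Two IDs each for the apiserver, etcd, scheduler, and controller-manager (versus one each for coredns, kube-proxy, and the storage provisioner) typically indicates those components have restarted, leaving an exited container alongside the current attempt.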
	I0828 10:44:07.703529    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:07.703535    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:07.717521    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:07.717531    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:07.728662    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:07.728672    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:07.744571    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:07.744582    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
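	The "describe nodes" pass shells into the guest and runs the kubectl binary pinned to the cluster version against the kubeconfig minikube provisioned inside the VM:

	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig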
	I0828 10:44:07.778274    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:07.778286    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:07.792948    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:07.792961    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:07.816058    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:07.816075    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:07.828370    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:07.828384    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:07.840082    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:07.840092    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:07.882474    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:07.882486    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:07.896200    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:07.896213    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:07.914997    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:07.915009    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:07.951775    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:07.951785    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:07.970988    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:07.970999    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:07.982283    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:07.982294    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:07.993753    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:07.993768    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
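	The dmesg capture keeps only warning-and-above kernel messages; the flags are standard util-linux options:

	    # -P: no pager; -H: human-readable output (the pager and color it implies
	    # are switched back off by -P and -L=never); --level: keep only the
	    # listed severities.
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400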
	I0828 10:44:10.500652    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:15.504839    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:15.504964    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:15.519400    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:15.519474    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:15.531001    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:15.531061    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:15.541653    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:15.541713    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:15.552259    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:15.552336    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:15.562824    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:15.562894    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:15.574192    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:15.574265    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:15.585222    4717 logs.go:276] 0 containers: []
	W0828 10:44:15.585235    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:15.585298    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:15.596336    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:44:15.596354    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:15.596360    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:15.610821    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:15.610832    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:15.622803    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:15.622814    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:15.634717    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:15.634730    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:15.639399    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:15.639407    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:15.682398    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:15.682408    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:15.694525    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:15.694535    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:15.706721    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:15.706731    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:15.745079    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:15.745089    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:15.783985    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:15.783995    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:15.798309    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:15.798319    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:15.809377    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:15.809388    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:15.821156    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:15.821166    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:15.839807    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:15.839821    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:15.864060    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:15.864078    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:15.878573    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:15.878588    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:18.396215    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:23.400098    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:23.400284    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:23.415624    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:23.415704    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:23.429472    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:23.429549    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:23.440297    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:23.440361    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:23.451442    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:23.451515    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:23.462420    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:23.462489    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:23.473411    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:23.473475    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:23.484165    4717 logs.go:276] 0 containers: []
	W0828 10:44:23.484176    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:23.484233    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:23.494248    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:44:23.494268    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:23.494274    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:23.506643    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:23.506654    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:23.518693    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:23.518707    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:23.536649    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:23.536660    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:23.560766    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:23.560776    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:23.595587    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:23.595600    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:23.611320    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:23.611332    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:23.622780    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:23.622792    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:23.659848    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:23.659856    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:23.698226    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:23.698237    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:23.711758    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:23.711769    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:23.723014    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:23.723025    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:23.727390    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:23.727397    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:23.744207    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:23.744217    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:23.760014    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:23.760025    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:23.774098    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:23.774112    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:26.294175    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:31.297159    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:31.297414    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:31.327875    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:31.327963    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:31.343324    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:31.343402    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:31.360665    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:31.360744    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:31.371377    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:31.371449    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:31.381655    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:31.381719    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:31.391852    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:31.391916    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:31.402089    4717 logs.go:276] 0 containers: []
	W0828 10:44:31.402101    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:31.402157    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:31.412431    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:44:31.412449    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:31.412454    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:31.435755    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:31.435763    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:31.449657    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:31.449667    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:31.465707    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:31.465718    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:31.477131    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:31.477141    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:31.493619    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:31.493632    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:31.497590    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:31.497599    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:31.516109    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:31.516120    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:31.531115    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:31.531128    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:31.542712    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:31.542722    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:31.554370    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:31.554381    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:31.592957    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:31.592970    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:31.633573    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:31.633587    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:31.645757    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:31.645769    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:31.667161    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:31.667171    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:31.701538    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:31.701553    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:34.213383    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:39.215990    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:39.216353    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:39.248595    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:39.248732    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:39.267531    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:39.267621    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:39.281833    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:39.281910    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:39.293655    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:39.293729    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:39.304701    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:39.304767    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:39.315182    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:39.315246    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:39.325740    4717 logs.go:276] 0 containers: []
	W0828 10:44:39.325753    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:39.325825    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:39.338150    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:44:39.338167    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:39.338172    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:39.349598    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:39.349608    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:39.361843    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:39.361853    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:39.373292    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:39.373301    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:39.409890    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:39.409908    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:39.414143    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:39.414152    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:39.450248    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:39.450261    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:39.464439    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:39.464451    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:39.478355    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:39.478368    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:39.492907    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:39.492918    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:39.504672    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:39.504682    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:39.525378    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:39.525389    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:39.548074    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:39.548082    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:39.586359    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:39.586373    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:39.601678    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:39.601689    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:39.613535    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:39.613546    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:42.127131    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:47.129666    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:47.129826    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:47.149409    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:47.149492    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:47.160406    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:47.160479    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:47.170922    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:47.170997    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:47.182283    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:47.182350    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:47.192516    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:47.192577    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:47.203370    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:47.203444    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:47.221007    4717 logs.go:276] 0 containers: []
	W0828 10:44:47.221017    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:47.221069    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:47.237766    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:44:47.237790    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:47.237796    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:47.251525    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:47.251535    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:47.268647    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:47.268657    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:47.280499    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:47.280510    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:47.294771    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:47.294781    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:47.332459    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:47.332471    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:47.347255    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:47.347265    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:47.359135    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:47.359145    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:47.383463    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:47.383472    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:47.397388    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:47.397398    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:47.431611    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:47.431622    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:47.443527    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:47.443538    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:47.465412    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:47.465421    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:47.476915    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:47.476926    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:47.488670    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:47.488681    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:47.492934    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:47.492943    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:50.031804    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:44:55.033976    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:44:55.034206    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:44:55.054084    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:44:55.054175    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:44:55.068801    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:44:55.068883    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:44:55.080795    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:44:55.080859    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:44:55.092036    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:44:55.092116    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:44:55.102253    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:44:55.102321    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:44:55.112795    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:44:55.112867    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:44:55.122799    4717 logs.go:276] 0 containers: []
	W0828 10:44:55.122811    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:44:55.122867    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:44:55.132763    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:44:55.132778    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:44:55.132783    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:44:55.169411    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:44:55.169419    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:44:55.181233    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:44:55.181246    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:44:55.218693    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:44:55.218703    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:44:55.232151    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:44:55.232161    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:44:55.245321    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:44:55.245332    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:44:55.260283    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:44:55.260294    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:44:55.279399    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:44:55.279410    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:44:55.292387    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:44:55.292398    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:44:55.306899    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:44:55.306915    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:44:55.321040    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:44:55.321054    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:44:55.325207    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:44:55.325215    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:44:55.362770    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:44:55.362781    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:44:55.374092    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:44:55.374106    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:44:55.385601    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:44:55.385612    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:44:55.396620    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:44:55.396629    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:44:57.921016    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:02.923332    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:02.923579    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:02.949815    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:45:02.949944    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:02.968460    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:45:02.968537    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:02.981866    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:45:02.981933    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:02.993288    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:45:02.993363    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:03.003596    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:45:03.003660    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:03.018174    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:45:03.018249    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:03.028044    4717 logs.go:276] 0 containers: []
	W0828 10:45:03.028053    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:03.028105    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:03.040002    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:45:03.040019    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:03.040024    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:03.074824    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:45:03.074835    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:45:03.086116    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:45:03.086129    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:45:03.098016    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:45:03.098028    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:45:03.109821    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:45:03.109833    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:45:03.121566    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:03.121580    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:45:03.157696    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:03.157704    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:03.161490    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:45:03.161498    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:45:03.202929    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:45:03.202940    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:03.214422    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:45:03.214436    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:45:03.232498    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:45:03.232509    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:45:03.248506    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:45:03.248518    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:45:03.267289    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:45:03.267300    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:45:03.279125    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:45:03.279135    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:45:03.293194    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:45:03.293205    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:45:03.310123    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:03.310136    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:05.833908    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:10.834926    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:10.835245    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:10.869860    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:45:10.869971    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:10.889909    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:45:10.890001    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:10.905778    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:45:10.905860    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:10.921992    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:45:10.922067    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:10.935484    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:45:10.935558    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:10.947433    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:45:10.947511    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:10.962683    4717 logs.go:276] 0 containers: []
	W0828 10:45:10.962694    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:10.962752    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:10.977195    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:45:10.977213    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:45:10.977220    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:45:10.995419    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:45:10.995431    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:45:11.012113    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:45:11.012130    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:45:11.031148    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:11.031164    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:45:11.072055    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:11.072076    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:11.113006    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:45:11.113020    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:45:11.132892    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:45:11.132901    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:45:11.147627    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:11.147638    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:11.170908    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:11.170917    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:11.175800    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:45:11.175811    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:45:11.191067    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:45:11.191084    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:45:11.231337    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:45:11.231350    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:45:11.250169    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:45:11.250182    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:45:11.263073    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:45:11.263082    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:45:11.275295    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:45:11.275306    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:45:11.289580    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:45:11.289597    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:13.805012    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:18.807664    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:18.807970    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:18.838763    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:45:18.838865    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:18.855914    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:45:18.856004    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:18.869796    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:45:18.869868    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:18.881082    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:45:18.881162    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:18.891502    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:45:18.891571    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:18.902118    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:45:18.902187    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:18.912487    4717 logs.go:276] 0 containers: []
	W0828 10:45:18.912500    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:18.912556    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:18.935839    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:45:18.935858    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:45:18.935863    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:45:18.965119    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:45:18.965134    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:45:19.008715    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:45:19.008728    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:45:19.020765    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:19.020775    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:19.042530    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:45:19.042537    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:45:19.053858    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:19.053875    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:45:19.093072    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:19.093081    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:19.130851    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:45:19.130862    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:45:19.145271    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:45:19.145281    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:45:19.156844    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:45:19.156856    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:45:19.171445    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:45:19.171455    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:45:19.183090    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:45:19.183099    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:45:19.200089    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:45:19.200100    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:19.212327    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:45:19.212339    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:45:19.227565    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:19.227576    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:19.232346    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:45:19.232353    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:45:21.746945    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:26.749071    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:26.749195    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:45:26.768218    4717 logs.go:276] 2 containers: [ff5ec9bcdbc0 f04951a7c514]
	I0828 10:45:26.768301    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:45:26.779788    4717 logs.go:276] 2 containers: [cadaebeab74a 57615586b5d3]
	I0828 10:45:26.779876    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:45:26.790783    4717 logs.go:276] 1 containers: [b8c085ebafff]
	I0828 10:45:26.790873    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:45:26.801570    4717 logs.go:276] 2 containers: [5c6a8a7a0f54 37d0386da62f]
	I0828 10:45:26.801652    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:45:26.812226    4717 logs.go:276] 1 containers: [3e1f28aa6731]
	I0828 10:45:26.812306    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:45:26.823186    4717 logs.go:276] 2 containers: [c969ea54be9d d8ab8c596fcc]
	I0828 10:45:26.823265    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:45:26.834190    4717 logs.go:276] 0 containers: []
	W0828 10:45:26.834201    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:45:26.834256    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:45:26.845330    4717 logs.go:276] 1 containers: [207d13dc73e9]
	I0828 10:45:26.845346    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:45:26.845352    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:45:26.892931    4717 logs.go:123] Gathering logs for kube-apiserver [ff5ec9bcdbc0] ...
	I0828 10:45:26.892943    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff5ec9bcdbc0"
	I0828 10:45:26.907552    4717 logs.go:123] Gathering logs for kube-apiserver [f04951a7c514] ...
	I0828 10:45:26.907563    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04951a7c514"
	I0828 10:45:26.946502    4717 logs.go:123] Gathering logs for coredns [b8c085ebafff] ...
	I0828 10:45:26.946515    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c085ebafff"
	I0828 10:45:26.958152    4717 logs.go:123] Gathering logs for kube-proxy [3e1f28aa6731] ...
	I0828 10:45:26.958163    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e1f28aa6731"
	I0828 10:45:26.974587    4717 logs.go:123] Gathering logs for kube-controller-manager [c969ea54be9d] ...
	I0828 10:45:26.974601    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c969ea54be9d"
	I0828 10:45:26.992262    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:45:26.992274    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:45:27.029480    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:45:27.029499    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:45:27.033814    4717 logs.go:123] Gathering logs for storage-provisioner [207d13dc73e9] ...
	I0828 10:45:27.033821    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207d13dc73e9"
	I0828 10:45:27.045276    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:45:27.045289    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:45:27.057705    4717 logs.go:123] Gathering logs for etcd [cadaebeab74a] ...
	I0828 10:45:27.057717    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cadaebeab74a"
	I0828 10:45:27.071998    4717 logs.go:123] Gathering logs for kube-controller-manager [d8ab8c596fcc] ...
	I0828 10:45:27.072012    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8ab8c596fcc"
	I0828 10:45:27.083770    4717 logs.go:123] Gathering logs for kube-scheduler [37d0386da62f] ...
	I0828 10:45:27.083782    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37d0386da62f"
	I0828 10:45:27.098650    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:45:27.098661    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:45:27.122234    4717 logs.go:123] Gathering logs for etcd [57615586b5d3] ...
	I0828 10:45:27.122241    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57615586b5d3"
	I0828 10:45:27.136508    4717 logs.go:123] Gathering logs for kube-scheduler [5c6a8a7a0f54] ...
	I0828 10:45:27.136519    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c6a8a7a0f54"
	I0828 10:45:29.650547    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:34.652947    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:34.653035    4717 kubeadm.go:597] duration metric: took 4m4.439352542s to restartPrimaryControlPlane
	W0828 10:45:34.653121    4717 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 10:45:34.653161    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0828 10:45:35.686519    4717 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.033377667s)
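
The Run/Completed pair above also carries a duration metric: ssh_runner appears to print a "Completed: ...: (elapsed)" line when a remote command runs long. A sketch of that pattern, reusing the hypothetical runSSH helper from the sketch further up; the one-second threshold is an assumption, not minikube's documented behavior.

    // runTimed wraps runSSH and reports elapsed time for slow commands,
    // mirroring the "Completed: ...: (1.033377667s)" line in the log.
    func runTimed(cmd string) (string, error) {
        start := time.Now()
        out, err := runSSH(cmd)
        if d := time.Since(start); d > time.Second { // threshold is an assumption
            fmt.Printf("Completed: %s: (%s)\n", cmd, d)
        }
        return out, err
    }
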
	I0828 10:45:35.686908    4717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 10:45:35.691868    4717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 10:45:35.694712    4717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 10:45:35.697426    4717 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 10:45:35.697431    4717 kubeadm.go:157] found existing configuration files:
	
	I0828 10:45:35.697452    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0828 10:45:35.699816    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 10:45:35.699842    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 10:45:35.702376    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0828 10:45:35.704832    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 10:45:35.704849    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 10:45:35.707517    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0828 10:45:35.710668    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 10:45:35.710690    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 10:45:35.713625    4717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0828 10:45:35.716124    4717 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 10:45:35.716142    4717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
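
The grep-then-rm sequence just above is the stale-config check: each kubeconfig must reference the expected control-plane endpoint, and any file that does not match (here, grep exits with status 2 because the files are missing) is removed so the subsequent kubeadm init regenerates it. A sketch of the same loop, reusing the package and hypothetical runSSH helper from the earlier sketch; the endpoint and paths are copied from the log.

    // cleanStaleKubeconfigs mirrors the check logged above: grep exiting
    // non-zero triggers removal; rm -f is a no-op if the file is already gone.
    func cleanStaleKubeconfigs() {
        endpoint := "https://control-plane.minikube.internal:50506"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if _, err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
                runSSH("sudo rm -f " + f)
            }
        }
    }
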
	I0828 10:45:35.719300    4717 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 10:45:35.736782    4717 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0828 10:45:35.736856    4717 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 10:45:35.784917    4717 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 10:45:35.784977    4717 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 10:45:35.785034    4717 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 10:45:35.835732    4717 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 10:45:35.844324    4717 out.go:235]   - Generating certificates and keys ...
	I0828 10:45:35.844357    4717 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 10:45:35.844382    4717 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 10:45:35.844414    4717 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 10:45:35.844442    4717 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 10:45:35.844476    4717 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 10:45:35.844508    4717 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 10:45:35.844545    4717 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 10:45:35.844579    4717 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 10:45:35.844617    4717 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 10:45:35.844657    4717 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 10:45:35.844673    4717 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 10:45:35.844700    4717 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 10:45:35.913262    4717 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 10:45:36.046244    4717 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 10:45:36.165186    4717 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 10:45:36.315761    4717 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 10:45:36.344072    4717 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 10:45:36.344646    4717 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 10:45:36.344685    4717 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 10:45:36.426389    4717 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 10:45:36.430458    4717 out.go:235]   - Booting up control plane ...
	I0828 10:45:36.430509    4717 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 10:45:36.430552    4717 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 10:45:36.430591    4717 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 10:45:36.430657    4717 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 10:45:36.433764    4717 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 10:45:41.441408    4717 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.007390 seconds
	I0828 10:45:41.441559    4717 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 10:45:41.454485    4717 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 10:45:41.965394    4717 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 10:45:41.965750    4717 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-801000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 10:45:42.480050    4717 kubeadm.go:310] [bootstrap-token] Using token: lyjl5u.emnixh7qt156wk4r
	I0828 10:45:42.486742    4717 out.go:235]   - Configuring RBAC rules ...
	I0828 10:45:42.486880    4717 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 10:45:42.487007    4717 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 10:45:42.494246    4717 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 10:45:42.496443    4717 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 10:45:42.498498    4717 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 10:45:42.500512    4717 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 10:45:42.506611    4717 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 10:45:42.683912    4717 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 10:45:42.886466    4717 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 10:45:42.886981    4717 kubeadm.go:310] 
	I0828 10:45:42.887012    4717 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 10:45:42.887016    4717 kubeadm.go:310] 
	I0828 10:45:42.887105    4717 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 10:45:42.887109    4717 kubeadm.go:310] 
	I0828 10:45:42.887121    4717 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 10:45:42.887207    4717 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 10:45:42.887235    4717 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 10:45:42.887237    4717 kubeadm.go:310] 
	I0828 10:45:42.887270    4717 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 10:45:42.887275    4717 kubeadm.go:310] 
	I0828 10:45:42.887365    4717 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 10:45:42.887392    4717 kubeadm.go:310] 
	I0828 10:45:42.887467    4717 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 10:45:42.887506    4717 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 10:45:42.887554    4717 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 10:45:42.887558    4717 kubeadm.go:310] 
	I0828 10:45:42.887606    4717 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 10:45:42.887651    4717 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 10:45:42.887657    4717 kubeadm.go:310] 
	I0828 10:45:42.887756    4717 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lyjl5u.emnixh7qt156wk4r \
	I0828 10:45:42.887804    4717 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5b3c4c1f8e59fd4c25ce08db6b17ec7ac98ea4455ff93445c7a91221249d86a1 \
	I0828 10:45:42.887813    4717 kubeadm.go:310] 	--control-plane 
	I0828 10:45:42.887828    4717 kubeadm.go:310] 
	I0828 10:45:42.887874    4717 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 10:45:42.887877    4717 kubeadm.go:310] 
	I0828 10:45:42.887916    4717 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lyjl5u.emnixh7qt156wk4r \
	I0828 10:45:42.888005    4717 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5b3c4c1f8e59fd4c25ce08db6b17ec7ac98ea4455ff93445c7a91221249d86a1 
	I0828 10:45:42.888077    4717 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 10:45:42.888085    4717 cni.go:84] Creating CNI manager for ""
	I0828 10:45:42.888095    4717 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:45:42.892734    4717 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 10:45:42.899653    4717 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 10:45:42.902491    4717 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
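
The two lines above create /etc/cni/net.d and write a 496-byte bridge conflist from memory. The exact bytes are generated internally and not shown in the log; a bridge CNI conflist of this kind is roughly of the following shape, where the subnet and plugin options are illustrative assumptions rather than what minikube actually wrote.

    // Illustrative bridge CNI conflist; field values are assumptions.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`
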
	I0828 10:45:42.907093    4717 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 10:45:42.907133    4717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 10:45:42.907215    4717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-801000 minikube.k8s.io/updated_at=2024_08_28T10_45_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=stopped-upgrade-801000 minikube.k8s.io/primary=true
	I0828 10:45:42.961622    4717 kubeadm.go:1113] duration metric: took 54.52425ms to wait for elevateKubeSystemPrivileges
	I0828 10:45:42.961666    4717 ops.go:34] apiserver oom_adj: -16
	I0828 10:45:42.961812    4717 kubeadm.go:394] duration metric: took 4m12.762148708s to StartCluster
	I0828 10:45:42.961825    4717 settings.go:142] acquiring lock: {Name:mk584f5f183a19e050e7184c0c9e70ea26430337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:45:42.961909    4717 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:45:42.962325    4717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/kubeconfig: {Name:mke8b729c65a2ae9e4d9042dc78e2127479f8609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:45:42.962545    4717 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:45:42.962551    4717 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 10:45:42.962588    4717 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-801000"
	I0828 10:45:42.962601    4717 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-801000"
	W0828 10:45:42.962607    4717 addons.go:243] addon storage-provisioner should already be in state true
	I0828 10:45:42.962618    4717 host.go:66] Checking if "stopped-upgrade-801000" exists ...
	I0828 10:45:42.962616    4717 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-801000"
	I0828 10:45:42.962634    4717 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-801000"
	I0828 10:45:42.962670    4717 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:45:42.963568    4717 kapi.go:59] client config for stopped-upgrade-801000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/stopped-upgrade-801000/client.key", CAFile:"/Users/jenkins/minikube-integration/19529-1176/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106777eb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0828 10:45:42.963691    4717 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-801000"
	W0828 10:45:42.963695    4717 addons.go:243] addon default-storageclass should already be in state true
	I0828 10:45:42.963702    4717 host.go:66] Checking if "stopped-upgrade-801000" exists ...
	I0828 10:45:42.966569    4717 out.go:177] * Verifying Kubernetes components...
	I0828 10:45:42.966926    4717 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 10:45:42.970850    4717 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 10:45:42.970856    4717 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa Username:docker}
	I0828 10:45:42.974555    4717 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 10:45:42.978621    4717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 10:45:42.982648    4717 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 10:45:42.982654    4717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 10:45:42.982660    4717 sshutil.go:53] new ssh client: &{IP:localhost Port:50471 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/stopped-upgrade-801000/id_rsa Username:docker}
	I0828 10:45:43.067568    4717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 10:45:43.073131    4717 api_server.go:52] waiting for apiserver process to appear ...
	I0828 10:45:43.073179    4717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:45:43.076999    4717 api_server.go:72] duration metric: took 114.444416ms to wait for apiserver process to appear ...
	I0828 10:45:43.077008    4717 api_server.go:88] waiting for apiserver healthz status ...
	I0828 10:45:43.077015    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:43.115897    4717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 10:45:43.128043    4717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
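
Both kubectl apply runs above share one shape: the pinned v1.24.1 kubectl inside the guest, pointed at the in-guest kubeconfig, applied to a manifest previously scp'd into /etc/kubernetes/addons. A sketch, again via the assumed runSSH helper from the first sketch.

    // applyAddons mirrors the two apply commands in the log.
    func applyAddons() {
        for _, m := range []string{"storageclass.yaml", "storage-provisioner.yaml"} {
            cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
                "/var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/" + m
            if out, err := runSSH(cmd); err != nil {
                fmt.Printf("apply %s failed: %v\n%s\n", m, err, out)
            }
        }
    }
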
	I0828 10:45:43.500126    4717 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0828 10:45:43.500139    4717 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0828 10:45:48.078951    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:48.079042    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:53.079217    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:53.079237    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:45:58.079467    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:45:58.079491    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:03.079732    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:03.079757    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:08.080482    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:08.080513    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:13.081132    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:13.081153    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0828 10:46:13.501487    4717 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0828 10:46:13.505735    4717 out.go:177] * Enabled addons: storage-provisioner
	I0828 10:46:13.515721    4717 addons.go:510] duration metric: took 30.554191209s for enable addons: enabled=[storage-provisioner]
	I0828 10:46:18.081282    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:18.081308    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:23.082188    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:23.082212    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:28.083367    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:28.083392    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:33.084907    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:33.084932    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:38.086435    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:38.086469    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:43.088516    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
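
The probe loop that fails repeatedly above (and keeps failing below) is a plain HTTPS GET against /healthz with a short per-request timeout; each "stopped" line is one timed-out attempt, after which minikube gathers logs and retries until its wait budget runs out. A self-contained Go sketch of that loop; InsecureSkipVerify is an assumption to keep it short (minikube actually verifies the apiserver against its own CA).

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s spacing of attempts above
            Transport: &http.Transport{
                // Assumption for brevity; minikube pins its own CA instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(6 * time.Minute) // the "Will wait 6m0s for node" budget
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                log.Printf("stopped: %v", err) // one timed-out attempt
            } else {
                status := resp.StatusCode
                resp.Body.Close()
                if status == http.StatusOK {
                    log.Println("apiserver healthy")
                    return
                }
                log.Printf("healthz returned %d", status)
            }
            time.Sleep(2500 * time.Millisecond)
        }
        log.Println("gave up waiting for apiserver healthz")
    }
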
	I0828 10:46:43.088641    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:46:43.117295    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:46:43.117374    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:46:43.132054    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:46:43.132127    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:46:43.142963    4717 logs.go:276] 2 containers: [b1b4962c707c 1673d4b3ae51]
	I0828 10:46:43.143032    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:46:43.153725    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:46:43.153795    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:46:43.164056    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:46:43.164130    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:46:43.174957    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:46:43.175024    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:46:43.185623    4717 logs.go:276] 0 containers: []
	W0828 10:46:43.185635    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:46:43.185686    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:46:43.196353    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:46:43.196371    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:46:43.196376    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:46:43.210077    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:46:43.210090    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:46:43.221106    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:46:43.221120    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:46:43.234992    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:46:43.235003    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:46:43.250131    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:46:43.250141    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:46:43.263383    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:46:43.263394    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:46:43.300154    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:46:43.300162    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:46:43.338097    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:46:43.338109    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:46:43.352853    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:46:43.352866    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:46:43.364294    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:46:43.364305    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:46:43.375979    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:46:43.375991    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:46:43.393667    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:46:43.393680    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:46:43.416756    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:46:43.416763    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:46:45.922768    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:50.923730    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:50.924186    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:46:50.964754    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:46:50.964883    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:46:50.986525    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:46:50.986626    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:46:51.002028    4717 logs.go:276] 2 containers: [b1b4962c707c 1673d4b3ae51]
	I0828 10:46:51.002107    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:46:51.016767    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:46:51.016833    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:46:51.027007    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:46:51.027068    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:46:51.037334    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:46:51.037403    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:46:51.047258    4717 logs.go:276] 0 containers: []
	W0828 10:46:51.047270    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:46:51.047326    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:46:51.059326    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:46:51.059341    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:46:51.059347    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:46:51.070904    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:46:51.070914    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:46:51.105221    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:46:51.105233    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:46:51.116703    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:46:51.116716    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:46:51.141322    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:46:51.141332    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:46:51.156834    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:46:51.156846    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:46:51.168133    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:46:51.168143    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:46:51.183041    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:46:51.183054    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:46:51.194235    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:46:51.194247    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:46:51.211553    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:46:51.211563    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:46:51.248165    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:46:51.248173    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:46:51.252217    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:46:51.252225    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:46:51.270998    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:46:51.271007    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:46:53.784440    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:46:58.787160    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:46:58.787597    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:46:58.818287    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:46:58.818416    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:46:58.836581    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:46:58.836665    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:46:58.850824    4717 logs.go:276] 2 containers: [b1b4962c707c 1673d4b3ae51]
	I0828 10:46:58.850897    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:46:58.862791    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:46:58.862863    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:46:58.873233    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:46:58.873293    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:46:58.883680    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:46:58.883739    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:46:58.893939    4717 logs.go:276] 0 containers: []
	W0828 10:46:58.893951    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:46:58.894009    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:46:58.905145    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:46:58.905159    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:46:58.905166    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:46:58.916371    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:46:58.916384    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:46:58.920652    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:46:58.920660    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:46:58.955473    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:46:58.955488    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:46:58.969607    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:46:58.969621    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:46:58.981224    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:46:58.981235    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:46:58.993437    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:46:58.993450    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:46:59.007794    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:46:59.007808    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:46:59.024776    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:46:59.024786    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:46:59.063120    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:46:59.063129    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:46:59.079719    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:46:59.079733    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:46:59.091421    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:46:59.091432    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:46:59.115063    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:46:59.115070    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:47:01.628462    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:47:06.630959    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:47:06.631461    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:47:06.670330    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:47:06.670466    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:47:06.692965    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:47:06.693085    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:47:06.708003    4717 logs.go:276] 2 containers: [b1b4962c707c 1673d4b3ae51]
	I0828 10:47:06.708081    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:47:06.720016    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:47:06.720087    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:47:06.731040    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:47:06.731115    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:47:06.747390    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:47:06.747459    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:47:06.757289    4717 logs.go:276] 0 containers: []
	W0828 10:47:06.757301    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:47:06.757362    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:47:06.772843    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:47:06.772859    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:47:06.772864    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:47:06.784798    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:47:06.784810    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:47:06.802351    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:47:06.802361    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:47:06.838678    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:47:06.838685    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:47:06.853325    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:47:06.853335    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:47:06.865162    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:47:06.865176    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:47:06.880033    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:47:06.880043    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:47:06.891547    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:47:06.891559    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:47:06.914835    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:47:06.914844    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:47:06.925904    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:47:06.925912    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:47:06.930492    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:47:06.930501    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:47:06.964346    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:47:06.964361    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:47:06.978832    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:47:06.978845    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:47:09.492420    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:47:14.494108    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:47:14.494461    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:47:14.523248    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:47:14.523372    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:47:14.541268    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:47:14.541348    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:47:14.554503    4717 logs.go:276] 2 containers: [b1b4962c707c 1673d4b3ae51]
	I0828 10:47:14.554580    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:47:14.565986    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:47:14.566053    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:47:14.584674    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:47:14.584742    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:47:14.594760    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:47:14.594823    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:47:14.605058    4717 logs.go:276] 0 containers: []
	W0828 10:47:14.605070    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:47:14.605124    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:47:14.615058    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:47:14.615072    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:47:14.615078    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:47:14.629242    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:47:14.629254    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:47:14.642712    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:47:14.642722    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:47:14.654218    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:47:14.654230    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:47:14.668733    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:47:14.668743    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:47:14.691881    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:47:14.691889    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:47:14.702958    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:47:14.702968    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:47:14.741243    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:47:14.741252    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:47:14.781478    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:47:14.781491    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:47:14.792940    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:47:14.792953    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:47:14.810096    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:47:14.810107    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:47:14.821902    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:47:14.821911    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:47:14.826417    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:47:14.826424    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:47:17.339762    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:47:22.341789    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:47:22.342198    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:47:22.380461    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:47:22.380587    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:47:22.402170    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:47:22.402290    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:47:22.417184    4717 logs.go:276] 2 containers: [b1b4962c707c 1673d4b3ae51]
	I0828 10:47:22.417269    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:47:22.433905    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:47:22.433975    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:47:22.444823    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:47:22.444891    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:47:22.455653    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:47:22.455721    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:47:22.466118    4717 logs.go:276] 0 containers: []
	W0828 10:47:22.466131    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:47:22.466186    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:47:22.477355    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:47:22.477371    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:47:22.477377    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:47:22.493036    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:47:22.493048    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:47:22.505055    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:47:22.505067    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:47:22.522041    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:47:22.522052    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:47:22.534179    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:47:22.534190    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:47:22.538825    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:47:22.538833    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:47:22.553404    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:47:22.553417    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:47:22.570830    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:47:22.570843    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:47:22.582865    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:47:22.582877    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:47:22.606504    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:47:22.606526    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:47:22.642433    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:47:22.642444    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:47:22.678976    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:47:22.678989    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:47:22.691435    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:47:22.691448    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:47:25.205591    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:47:30.207877    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:47:30.208251    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:47:30.239182    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:47:30.239311    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:47:30.259885    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:47:30.259966    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:47:30.276593    4717 logs.go:276] 2 containers: [b1b4962c707c 1673d4b3ae51]
	I0828 10:47:30.276664    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:47:30.288358    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:47:30.288425    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:47:30.298578    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:47:30.298642    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:47:30.309288    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:47:30.309359    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:47:30.319368    4717 logs.go:276] 0 containers: []
	W0828 10:47:30.319378    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:47:30.319429    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:47:30.329710    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:47:30.329726    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:47:30.329730    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:47:30.368219    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:47:30.368229    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:47:30.382130    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:47:30.382142    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:47:30.397209    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:47:30.397220    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:47:30.411486    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:47:30.411496    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:47:30.425920    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:47:30.425932    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:47:30.455244    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:47:30.455250    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:47:30.459545    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:47:30.459551    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:47:30.493868    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:47:30.493880    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:47:30.507789    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:47:30.507802    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:47:30.519138    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:47:30.519148    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:47:30.531224    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:47:30.531236    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:47:30.551445    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:47:30.551458    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
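The block above is one complete diagnostic pass: minikube probes https://10.0.2.15:8443/healthz with a short client-side timeout (hence every "Client.Timeout exceeded while awaiting headers" failure), then enumerates the control-plane containers and gathers their logs before the next attempt. Below is a minimal Go sketch of that probe loop; the 5s client timeout, the ~2.5s retry gap, and the TLS handling are assumptions read off the timestamps in this log, not minikube's actual api_server.go implementation.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz mirrors the pattern in the log: GET <url>/healthz with a
// short client timeout, retrying until an overall deadline. All values
// here are illustrative assumptions.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		// A hung apiserver yields exactly the log's error:
		// "Client.Timeout exceeded while awaiting headers".
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The cluster serves a self-signed cert; skipping
			// verification keeps this sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Roughly the gap between attempts visible in the timestamps.
		time.Sleep(2500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```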
	I0828 10:47:33.064597    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:47:38.066892    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:47:38.067093    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:47:38.087300    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:47:38.087388    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:47:38.101444    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:47:38.101524    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:47:38.116370    4717 logs.go:276] 2 containers: [b1b4962c707c 1673d4b3ae51]
	I0828 10:47:38.116438    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:47:38.127187    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:47:38.127256    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:47:38.137485    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:47:38.137546    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:47:38.147712    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:47:38.147780    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:47:38.157919    4717 logs.go:276] 0 containers: []
	W0828 10:47:38.157929    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:47:38.157983    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:47:38.168327    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:47:38.168343    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:47:38.168349    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:47:38.206047    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:47:38.206055    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:47:38.220914    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:47:38.220927    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:47:38.234302    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:47:38.234316    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:47:38.245954    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:47:38.245967    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:47:38.257390    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:47:38.257403    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:47:38.268921    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:47:38.268935    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:47:38.273233    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:47:38.273242    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:47:38.308829    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:47:38.308844    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:47:38.321693    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:47:38.321708    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:47:38.340226    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:47:38.340238    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:47:38.360591    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:47:38.360604    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:47:38.377255    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:47:38.377266    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:47:40.904369    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:47:45.906546    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:47:45.906782    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:47:45.936032    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:47:45.936139    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:47:45.954192    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:47:45.954271    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:47:45.967529    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:47:45.967602    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:47:45.978855    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:47:45.978933    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:47:45.989172    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:47:45.989243    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:47:45.999871    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:47:45.999939    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:47:46.010095    4717 logs.go:276] 0 containers: []
	W0828 10:47:46.010107    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:47:46.010162    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:47:46.020400    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:47:46.020419    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:47:46.020424    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:47:46.031127    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:47:46.031140    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:47:46.045652    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:47:46.045663    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:47:46.066200    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:47:46.066213    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:47:46.105349    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:47:46.105359    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:47:46.109482    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:47:46.109491    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:47:46.123408    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:47:46.123418    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:47:46.147201    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:47:46.147209    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:47:46.182330    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:47:46.182340    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:47:46.193681    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:47:46.193694    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:47:46.205406    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:47:46.205415    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:47:46.217523    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:47:46.217535    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:47:46.235541    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:47:46.235553    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:47:46.246858    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:47:46.246871    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:47:46.268071    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:47:46.268084    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
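Note the CoreDNS list growing from 2 container IDs to 4 at 10:47:45. Discovery uses docker ps -a, which includes exited containers, so when the CoreDNS pods restart the old container IDs remain listed alongside the new ones. A small Go sketch of that discovery step follows; it runs docker locally for simplicity, whereas minikube runs the same command over SSH (ssh_runner.go), and the component names are taken from this log.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers, including exited ones, whose name
// matches the kubelet's k8s_<component>_... naming scheme, exactly as
// the "docker ps -a --filter=name=k8s_..." lines above do.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields also tolerates a trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		fmt.Printf("%s: %v (err=%v)\n", c, ids, err)
	}
}
```

Because the name filter is a substring match, a restarted component accumulates IDs here until the old containers are pruned, which is why the later passes gather four separate coredns logs.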
	I0828 10:47:48.779871    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:47:53.780965    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:47:53.781472    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:47:53.821352    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:47:53.821482    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:47:53.843521    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:47:53.843635    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:47:53.858481    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:47:53.858564    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:47:53.872233    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:47:53.872301    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:47:53.888048    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:47:53.888119    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:47:53.898329    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:47:53.898388    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:47:53.908905    4717 logs.go:276] 0 containers: []
	W0828 10:47:53.908918    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:47:53.908972    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:47:53.919705    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:47:53.919721    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:47:53.919727    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:47:53.923918    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:47:53.923927    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:47:53.934974    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:47:53.934986    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:47:53.946927    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:47:53.946940    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:47:53.962611    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:47:53.962621    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:47:53.988514    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:47:53.988528    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:47:54.002402    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:47:54.002413    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:47:54.014034    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:47:54.014047    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:47:54.025384    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:47:54.025398    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:47:54.062636    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:47:54.062649    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:47:54.077107    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:47:54.077118    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:47:54.087952    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:47:54.087961    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:47:54.106855    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:47:54.106869    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:47:54.142700    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:47:54.142709    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:47:54.157315    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:47:54.157327    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:47:56.671346    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:48:01.674121    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:48:01.674574    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:48:01.713515    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:48:01.713650    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:48:01.733795    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:48:01.733890    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:48:01.749218    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:48:01.749302    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:48:01.761917    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:48:01.761986    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:48:01.772363    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:48:01.772422    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:48:01.782549    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:48:01.782614    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:48:01.792573    4717 logs.go:276] 0 containers: []
	W0828 10:48:01.792588    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:48:01.792645    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:48:01.805840    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:48:01.805858    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:48:01.805864    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:48:01.820057    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:48:01.820070    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:48:01.835091    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:48:01.835104    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:48:01.852885    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:48:01.852898    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:48:01.876750    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:48:01.876757    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:48:01.914039    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:48:01.914050    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:48:01.919240    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:48:01.919253    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:48:01.958514    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:48:01.958529    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:48:01.973624    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:48:01.973640    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:48:01.986672    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:48:01.986685    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:48:01.999996    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:48:02.000013    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:48:02.013910    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:48:02.013921    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:48:02.027977    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:48:02.027992    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:48:02.044599    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:48:02.044644    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:48:02.059710    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:48:02.059724    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
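One detail worth noting: the order in which the sources are gathered changes on every pass (kubelet first in one pass, dmesg first in another, etcd first in a third). That is consistent with iterating over a Go map, whose iteration order is deliberately randomized; whether minikube's logs.go actually stores its log sources in a map is an assumption here, not something this report confirms. A tiny demonstration of the effect:

```go
package main

import "fmt"

func main() {
	// Go randomizes map iteration order on each range statement, which
	// would explain why each diagnostic pass above gathers the same
	// sources in a different order.
	sources := map[string]string{
		"kubelet": "journalctl -u kubelet -n 400",
		"etcd":    "docker logs --tail 400 7a6db4567cc4",
		"dmesg":   "dmesg ... | tail -n 400",
		"Docker":  "journalctl -u docker -u cri-docker -n 400",
	}
	for i := 0; i < 3; i++ {
		for name := range sources {
			fmt.Print(name, " ")
		}
		fmt.Println() // each line typically prints in a different order
	}
}
```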
	I0828 10:48:04.574318    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:48:09.574432    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:48:09.574823    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:48:09.615383    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:48:09.615515    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:48:09.643900    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:48:09.643986    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:48:09.658279    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:48:09.658358    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:48:09.669952    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:48:09.670015    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:48:09.680673    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:48:09.680734    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:48:09.691359    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:48:09.691423    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:48:09.702353    4717 logs.go:276] 0 containers: []
	W0828 10:48:09.702364    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:48:09.702420    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:48:09.713123    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:48:09.713150    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:48:09.713156    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:48:09.717845    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:48:09.717855    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:48:09.729674    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:48:09.729686    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:48:09.743700    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:48:09.743712    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:48:09.760387    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:48:09.760400    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:48:09.771768    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:48:09.771781    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:48:09.787278    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:48:09.787290    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:48:09.804699    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:48:09.804712    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:48:09.824713    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:48:09.824726    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:48:09.862490    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:48:09.862522    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:48:09.896835    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:48:09.896849    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:48:09.911202    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:48:09.911212    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:48:09.922769    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:48:09.922779    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:48:09.936607    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:48:09.936618    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:48:09.949140    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:48:09.949152    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:48:12.475247    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:48:17.475936    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:48:17.476042    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:48:17.487711    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:48:17.487766    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:48:17.503559    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:48:17.503620    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:48:17.515512    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:48:17.515585    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:48:17.527356    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:48:17.527401    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:48:17.538094    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:48:17.538149    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:48:17.549392    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:48:17.549477    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:48:17.566690    4717 logs.go:276] 0 containers: []
	W0828 10:48:17.566703    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:48:17.566749    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:48:17.578287    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:48:17.578359    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:48:17.578373    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:48:17.594223    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:48:17.594235    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:48:17.606716    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:48:17.606727    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:48:17.645065    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:48:17.645080    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:48:17.658409    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:48:17.658422    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:48:17.671635    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:48:17.671646    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:48:17.684183    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:48:17.684198    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:48:17.696708    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:48:17.696721    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:48:17.702078    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:48:17.702087    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:48:17.717497    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:48:17.717508    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:48:17.732610    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:48:17.732622    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:48:17.771076    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:48:17.771089    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:48:17.793492    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:48:17.793503    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:48:17.806472    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:48:17.806483    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:48:17.832355    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:48:17.832372    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
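The "container status" step just above is a shell fallback chain: the backticks substitute crictl's full path when `which crictl` succeeds (echoing the bare name otherwise, so the command line still parses), and the outer || falls back to docker ps -a if crictl is absent or errors out. A hedged Go equivalent of the same preference order, minus the sudo handling, purely to make the logic explicit:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus reproduces the fallback in the log's "container status"
// step: prefer crictl if it is on PATH, otherwise use docker.
func containerStatus() (string, error) {
	tool := "docker"
	if path, err := exec.LookPath("crictl"); err == nil {
		tool = path
	}
	out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
	if err != nil && tool != "docker" {
		// crictl was present but failed (e.g. no CRI endpoint
		// configured): same recovery as the one-liner's trailing
		// "|| sudo docker ps -a".
		out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
	}
	return string(out), err
}

func main() {
	out, err := containerStatus()
	fmt.Println(out)
	if err != nil {
		fmt.Println("error:", err)
	}
}
```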
	I0828 10:48:20.347317    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:48:25.349991    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:48:25.350451    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:48:25.389154    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:48:25.389304    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:48:25.411074    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:48:25.411178    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:48:25.426198    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:48:25.426281    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:48:25.439159    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:48:25.439231    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:48:25.449867    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:48:25.449926    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:48:25.460482    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:48:25.460550    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:48:25.471297    4717 logs.go:276] 0 containers: []
	W0828 10:48:25.471309    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:48:25.471367    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:48:25.483010    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:48:25.483026    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:48:25.483031    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:48:25.497312    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:48:25.497325    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:48:25.508931    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:48:25.508942    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:48:25.520786    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:48:25.520798    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:48:25.532919    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:48:25.532932    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:48:25.544482    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:48:25.544496    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:48:25.580067    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:48:25.580075    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:48:25.591277    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:48:25.591290    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:48:25.613211    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:48:25.613220    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:48:25.624455    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:48:25.624467    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:48:25.628813    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:48:25.628821    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:48:25.640268    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:48:25.640281    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:48:25.653816    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:48:25.653828    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:48:25.668511    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:48:25.668523    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:48:25.692098    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:48:25.692105    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:48:28.228277    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:48:33.230440    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:48:33.230645    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:48:33.245249    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:48:33.245321    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:48:33.260243    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:48:33.260317    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:48:33.270550    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:48:33.270610    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:48:33.280976    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:48:33.281046    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:48:33.291709    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:48:33.291790    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:48:33.302465    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:48:33.302537    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:48:33.313229    4717 logs.go:276] 0 containers: []
	W0828 10:48:33.313240    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:48:33.313299    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:48:33.323994    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:48:33.324010    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:48:33.324015    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:48:33.339018    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:48:33.339031    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:48:33.375811    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:48:33.375825    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:48:33.395101    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:48:33.395114    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:48:33.406808    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:48:33.406820    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:48:33.418886    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:48:33.418897    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:48:33.442396    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:48:33.442409    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:48:33.467532    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:48:33.467540    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:48:33.505334    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:48:33.505340    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:48:33.509478    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:48:33.509485    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:48:33.521087    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:48:33.521099    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:48:33.532340    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:48:33.532351    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:48:33.544392    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:48:33.544405    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:48:33.556306    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:48:33.556317    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:48:33.570503    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:48:33.570511    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:48:36.084446    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:48:41.087012    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:48:41.087082    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:48:41.098853    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:48:41.098927    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:48:41.111949    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:48:41.112019    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:48:41.123963    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:48:41.124054    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:48:41.136021    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:48:41.136096    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:48:41.147776    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:48:41.147847    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:48:41.159385    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:48:41.159481    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:48:41.170677    4717 logs.go:276] 0 containers: []
	W0828 10:48:41.170688    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:48:41.170735    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:48:41.182837    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:48:41.182856    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:48:41.182862    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:48:41.199663    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:48:41.199675    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:48:41.215443    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:48:41.215454    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:48:41.229715    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:48:41.229727    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:48:41.248241    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:48:41.248256    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:48:41.275206    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:48:41.275221    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:48:41.279956    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:48:41.279966    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:48:41.296139    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:48:41.296151    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:48:41.308862    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:48:41.308875    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:48:41.324109    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:48:41.324122    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:48:41.337814    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:48:41.337826    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:48:41.350576    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:48:41.350591    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:48:41.366358    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:48:41.366372    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:48:41.406916    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:48:41.406930    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:48:41.447443    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:48:41.447455    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:48:43.962973    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:48:48.965167    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:48:48.965518    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:48:49.002467    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:48:49.002590    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:48:49.031662    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:48:49.031773    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:48:49.057215    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:48:49.057287    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:48:49.067927    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:48:49.067987    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:48:49.078876    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:48:49.078939    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:48:49.089214    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:48:49.089274    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:48:49.099368    4717 logs.go:276] 0 containers: []
	W0828 10:48:49.099380    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:48:49.099433    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:48:49.109630    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:48:49.109644    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:48:49.109650    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:48:49.121049    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:48:49.121063    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:48:49.146321    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:48:49.146329    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:48:49.157738    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:48:49.157751    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:48:49.180728    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:48:49.180741    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:48:49.185585    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:48:49.185593    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:48:49.219635    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:48:49.219648    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:48:49.234475    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:48:49.234486    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:48:49.246633    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:48:49.246642    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:48:49.257974    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:48:49.257984    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:48:49.276670    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:48:49.276682    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:48:49.312642    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:48:49.312650    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:48:49.324382    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:48:49.324394    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:48:49.338670    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:48:49.338679    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:48:49.350198    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:48:49.350209    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
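Every collection step in these passes is bounded: docker logs --tail 400, journalctl -n 400, and dmesg piped through tail -n 400, so repeating the whole sweep every few seconds stays cheap no matter how long the components have been running. A sketch of the container-log half of that pattern, with container IDs from this log used purely as placeholders:

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs matches the bounded collection above: never more
// than the last N lines per container per pass.
func tailContainerLogs(id string, lines int) (string, error) {
	out, err := exec.Command("docker", "logs",
		"--tail", fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

func main() {
	// IDs taken from the log above; on another machine substitute
	// the IDs returned by your own "docker ps -a" discovery step.
	for _, id := range []string{"3cd2d68a0953", "7a6db4567cc4", "a49de4d0c2ca"} {
		out, err := tailContainerLogs(id, 400)
		fmt.Printf("== %s (err=%v) ==\n%s\n", id, err, out)
	}
}
```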
	I0828 10:48:51.864160    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:48:56.866753    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:48:56.866960    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:48:56.890444    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:48:56.890549    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:48:56.907589    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:48:56.907669    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:48:56.920027    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:48:56.920103    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:48:56.931996    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:48:56.932055    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:48:56.947339    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:48:56.947406    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:48:56.958414    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:48:56.958477    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:48:56.969171    4717 logs.go:276] 0 containers: []
	W0828 10:48:56.969188    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:48:56.969244    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:48:56.980232    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:48:56.980250    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:48:56.980256    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:48:56.995411    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:48:56.995422    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:48:57.014479    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:48:57.014489    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:48:57.026531    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:48:57.026542    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:48:57.038558    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:48:57.038568    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:48:57.051485    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:48:57.051497    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:48:57.064468    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:48:57.064482    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:48:57.076521    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:48:57.076534    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:48:57.094311    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:48:57.094322    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:48:57.106659    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:48:57.106672    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:48:57.141172    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:48:57.141181    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:48:57.153595    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:48:57.153607    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:48:57.171791    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:48:57.171800    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:48:57.195354    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:48:57.195361    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:48:57.231419    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:48:57.231426    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:48:59.737476    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:49:04.740148    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:49:04.740521    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:49:04.778095    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:49:04.778218    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:49:04.797920    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:49:04.798012    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:49:04.812691    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:49:04.812762    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:49:04.827117    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:49:04.827187    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:49:04.838614    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:49:04.838678    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:49:04.850067    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:49:04.850131    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:49:04.861098    4717 logs.go:276] 0 containers: []
	W0828 10:49:04.861109    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:49:04.861158    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:49:04.872114    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:49:04.872132    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:49:04.872138    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:49:04.883879    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:49:04.883892    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:49:04.896145    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:49:04.896158    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:49:04.911307    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:49:04.911320    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:49:04.923651    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:49:04.923664    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:49:04.936241    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:49:04.936252    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:49:04.953942    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:49:04.953955    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:49:04.965842    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:49:04.965852    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:49:04.990577    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:49:04.990587    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:49:05.028443    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:49:05.028455    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:49:05.033017    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:49:05.033026    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:49:05.047621    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:49:05.047633    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:49:05.060060    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:49:05.060072    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:49:05.096549    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:49:05.096562    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:49:05.112570    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:49:05.112583    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:49:07.626396    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:49:12.628525    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:49:12.628966    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:49:12.669422    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:49:12.669553    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:49:12.691438    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:49:12.691552    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:49:12.707127    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:49:12.707201    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:49:12.720064    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:49:12.720136    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:49:12.731534    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:49:12.731597    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:49:12.743140    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:49:12.743207    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:49:12.754187    4717 logs.go:276] 0 containers: []
	W0828 10:49:12.754197    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:49:12.754249    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:49:12.766959    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:49:12.766979    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:49:12.766985    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:49:12.779888    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:49:12.779901    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:49:12.795370    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:49:12.795384    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:49:12.808279    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:49:12.808292    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:49:12.831490    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:49:12.831497    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:49:12.867129    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:49:12.867135    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:49:12.906760    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:49:12.906769    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:49:12.921218    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:49:12.921227    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:49:12.939716    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:49:12.939728    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:49:12.952240    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:49:12.952249    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:49:12.964359    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:49:12.964373    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:49:12.976979    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:49:12.976992    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:49:12.989297    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:49:12.989306    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:49:12.993905    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:49:12.993911    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:49:13.008959    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:49:13.008969    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:49:15.522979    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:49:20.525201    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:49:20.525577    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:49:20.567489    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:49:20.567601    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:49:20.586383    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:49:20.586465    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:49:20.599992    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:49:20.600062    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:49:20.612229    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:49:20.612289    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:49:20.623429    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:49:20.623494    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:49:20.634645    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:49:20.634722    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:49:20.646752    4717 logs.go:276] 0 containers: []
	W0828 10:49:20.646762    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:49:20.646818    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:49:20.657824    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:49:20.657841    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:49:20.657847    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:49:20.695651    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:49:20.695658    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:49:20.699840    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:49:20.699846    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:49:20.715310    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:49:20.715322    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:49:20.771809    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:49:20.771823    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:49:20.793228    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:49:20.793238    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:49:20.807778    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:49:20.807791    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:49:20.819060    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:49:20.819072    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:49:20.836504    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:49:20.836517    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:49:20.847805    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:49:20.847819    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:49:20.859241    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:49:20.859254    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:49:20.870717    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:49:20.870732    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:49:20.882445    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:49:20.882456    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:49:20.896593    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:49:20.896606    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:49:20.921331    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:49:20.921339    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:49:23.439692    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:49:28.442282    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:49:28.442638    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:49:28.475693    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:49:28.475832    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:49:28.496217    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:49:28.496319    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:49:28.510971    4717 logs.go:276] 4 containers: [67133e03a04c 2a868a349cbf b1b4962c707c 1673d4b3ae51]
	I0828 10:49:28.511045    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:49:28.523893    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:49:28.523950    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:49:28.534198    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:49:28.534258    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:49:28.544663    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:49:28.544723    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:49:28.555326    4717 logs.go:276] 0 containers: []
	W0828 10:49:28.555337    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:49:28.555390    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:49:28.565706    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:49:28.565726    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:49:28.565731    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:49:28.599993    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:49:28.600005    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:49:28.614277    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:49:28.614289    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:49:28.628072    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:49:28.628083    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:49:28.639346    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:49:28.639357    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:49:28.675455    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:49:28.675465    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:49:28.679562    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:49:28.679569    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:49:28.693133    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:49:28.693144    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:49:28.710327    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:49:28.710338    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:49:28.734962    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:49:28.734972    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:49:28.746195    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:49:28.746208    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:49:28.757340    4717 logs.go:123] Gathering logs for coredns [b1b4962c707c] ...
	I0828 10:49:28.757348    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1b4962c707c"
	I0828 10:49:28.771764    4717 logs.go:123] Gathering logs for coredns [1673d4b3ae51] ...
	I0828 10:49:28.771773    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1673d4b3ae51"
	I0828 10:49:28.783912    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:49:28.783925    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:49:28.795565    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:49:28.795575    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:49:31.308697    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:49:36.310835    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:49:36.311303    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0828 10:49:36.351227    4717 logs.go:276] 1 containers: [3cd2d68a0953]
	I0828 10:49:36.351349    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0828 10:49:36.373188    4717 logs.go:276] 1 containers: [7a6db4567cc4]
	I0828 10:49:36.373298    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0828 10:49:36.389011    4717 logs.go:276] 4 containers: [eeff882e02d7 1c940ea9d30b 67133e03a04c 2a868a349cbf]
	I0828 10:49:36.389089    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0828 10:49:36.401528    4717 logs.go:276] 1 containers: [a49de4d0c2ca]
	I0828 10:49:36.401597    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0828 10:49:36.412347    4717 logs.go:276] 1 containers: [2b663fa89e75]
	I0828 10:49:36.412407    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0828 10:49:36.426499    4717 logs.go:276] 1 containers: [d5fea5bfd6e7]
	I0828 10:49:36.426570    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0828 10:49:36.436605    4717 logs.go:276] 0 containers: []
	W0828 10:49:36.436614    4717 logs.go:278] No container was found matching "kindnet"
	I0828 10:49:36.436664    4717 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0828 10:49:36.447565    4717 logs.go:276] 1 containers: [0c2ae3ec392a]
	I0828 10:49:36.447584    4717 logs.go:123] Gathering logs for coredns [eeff882e02d7] ...
	I0828 10:49:36.447591    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeff882e02d7"
	I0828 10:49:36.458737    4717 logs.go:123] Gathering logs for coredns [1c940ea9d30b] ...
	I0828 10:49:36.458748    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c940ea9d30b"
	I0828 10:49:36.469879    4717 logs.go:123] Gathering logs for kube-scheduler [a49de4d0c2ca] ...
	I0828 10:49:36.469890    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a49de4d0c2ca"
	I0828 10:49:36.484733    4717 logs.go:123] Gathering logs for Docker ...
	I0828 10:49:36.484744    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0828 10:49:36.509222    4717 logs.go:123] Gathering logs for kubelet ...
	I0828 10:49:36.509230    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 10:49:36.546970    4717 logs.go:123] Gathering logs for kube-apiserver [3cd2d68a0953] ...
	I0828 10:49:36.546978    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cd2d68a0953"
	I0828 10:49:36.568975    4717 logs.go:123] Gathering logs for storage-provisioner [0c2ae3ec392a] ...
	I0828 10:49:36.568986    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2ae3ec392a"
	I0828 10:49:36.580801    4717 logs.go:123] Gathering logs for coredns [67133e03a04c] ...
	I0828 10:49:36.580812    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67133e03a04c"
	I0828 10:49:36.592539    4717 logs.go:123] Gathering logs for kube-proxy [2b663fa89e75] ...
	I0828 10:49:36.592552    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b663fa89e75"
	I0828 10:49:36.604593    4717 logs.go:123] Gathering logs for kube-controller-manager [d5fea5bfd6e7] ...
	I0828 10:49:36.604603    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5fea5bfd6e7"
	I0828 10:49:36.626547    4717 logs.go:123] Gathering logs for coredns [2a868a349cbf] ...
	I0828 10:49:36.626558    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a868a349cbf"
	I0828 10:49:36.638002    4717 logs.go:123] Gathering logs for container status ...
	I0828 10:49:36.638014    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 10:49:36.650601    4717 logs.go:123] Gathering logs for dmesg ...
	I0828 10:49:36.650615    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 10:49:36.655083    4717 logs.go:123] Gathering logs for describe nodes ...
	I0828 10:49:36.655093    4717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 10:49:36.690577    4717 logs.go:123] Gathering logs for etcd [7a6db4567cc4] ...
	I0828 10:49:36.690590    4717 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6db4567cc4"
	I0828 10:49:39.206840    4717 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0828 10:49:44.208981    4717 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0828 10:49:44.218970    4717 out.go:201] 
	W0828 10:49:44.222828    4717 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0828 10:49:44.222846    4717 out.go:270] * 
	W0828 10:49:44.224711    4717 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:49:44.238928    4717 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-801000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (577.49s)
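The failure above is the apiserver health wait timing out: api_server.go probes https://10.0.2.15:8443/healthz with a short per-request timeout, over and over, until the "wait 6m0s for node" budget is spent. A minimal standalone Go sketch of that polling pattern (not minikube's actual implementation; the URL, the ~5s per-probe timeout, and the 6m deadline are read off the log above, and everything else is assumed for illustration):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns 200 OK or the overall deadline passes.
	func waitHealthy(url string, wait time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-probe timeout, matching the ~5s gaps in the log
			Transport: &http.Transport{
				// assumption: the apiserver serves a self-signed cert inside the VM
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(wait)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(2 * time.Second) // pause between probes
		}
		return fmt.Errorf("apiserver healthz never reported healthy after %s", wait)
	}

	func main() {
		if err := waitHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("X", err)
		}
	}

In the run above every probe failed this way for the full window, so the log-gathering cycles between probes (docker ps per component, then docker logs --tail 400) repeat until the GUEST_START exit.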

                                                
                                    
TestPause/serial/Start (9.92s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-141000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
E0828 10:46:53.878582    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-141000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.855303042s)

                                                
                                                
-- stdout --
	* [pause-141000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-141000" primary control-plane node in "pause-141000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-141000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-141000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-141000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-141000 -n pause-141000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-141000 -n pause-141000: exit status 7 (59.227666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-141000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.92s)
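This and every remaining failure in this section show the same qemu2 symptom: the driver cannot reach the /var/run/socket_vmnet unix socket on the build host, so VM creation (or restart) aborts with "Connection refused" before Kubernetes is ever involved. A self-contained Go probe that reproduces the check (a sketch, not part of the test suite; the socket path comes from the log, the 2s timeout is an arbitrary choice):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client connects qemu to.
		// If the socket_vmnet daemon is not running on the host (or the
		// socket path is stale), this fails with "connect: connection
		// refused", matching every StartHost failure in this section.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, the suggested "minikube delete -p <profile>" is unlikely to help, since the problem is the host-side socket_vmnet service rather than the minikube profile; restarting that service on the Jenkins host is the more plausible fix.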

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-188000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-188000 --driver=qemu2 : exit status 80 (9.78843925s)

                                                
                                                
-- stdout --
	* [NoKubernetes-188000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-188000" primary control-plane node in "NoKubernetes-188000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-188000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-188000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-188000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-188000 -n NoKubernetes-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-188000 -n NoKubernetes-188000: exit status 7 (55.449959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.84s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-188000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-188000 --no-kubernetes --driver=qemu2 : exit status 80 (5.243847792s)

                                                
                                                
-- stdout --
	* [NoKubernetes-188000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-188000
	* Restarting existing qemu2 VM for "NoKubernetes-188000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-188000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-188000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-188000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-188000 -n NoKubernetes-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-188000 -n NoKubernetes-188000: exit status 7 (58.7205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

                                                
                                    
TestNoKubernetes/serial/Start (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-188000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-188000 --no-kubernetes --driver=qemu2 : exit status 80 (5.248122208s)

                                                
                                                
-- stdout --
	* [NoKubernetes-188000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-188000
	* Restarting existing qemu2 VM for "NoKubernetes-188000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-188000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-188000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-188000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-188000 -n NoKubernetes-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-188000 -n NoKubernetes-188000: exit status 7 (60.275625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-188000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-188000 --driver=qemu2 : exit status 80 (5.259433917s)

                                                
                                                
-- stdout --
	* [NoKubernetes-188000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-188000
	* Restarting existing qemu2 VM for "NoKubernetes-188000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-188000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-188000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-188000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-188000 -n NoKubernetes-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-188000 -n NoKubernetes-188000: exit status 7 (35.921375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.758581959s)

                                                
                                                
-- stdout --
	* [auto-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-160000" primary control-plane node in "auto-160000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 10:47:54.505737    4925 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:47:54.505890    4925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:47:54.505894    4925 out.go:358] Setting ErrFile to fd 2...
	I0828 10:47:54.505896    4925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:47:54.506028    4925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:47:54.507121    4925 out.go:352] Setting JSON to false
	I0828 10:47:54.523543    4925 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4638,"bootTime":1724862636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:47:54.523617    4925 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:47:54.529651    4925 out.go:177] * [auto-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:47:54.537618    4925 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:47:54.537681    4925 notify.go:220] Checking for updates...
	I0828 10:47:54.546519    4925 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:47:54.549605    4925 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:47:54.553456    4925 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:47:54.556584    4925 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:47:54.559578    4925 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:47:54.562928    4925 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:47:54.562998    4925 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:47:54.563058    4925 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:47:54.567576    4925 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:47:54.574586    4925 start.go:297] selected driver: qemu2
	I0828 10:47:54.574590    4925 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:47:54.574596    4925 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:47:54.577038    4925 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:47:54.580565    4925 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:47:54.583689    4925 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:47:54.583743    4925 cni.go:84] Creating CNI manager for ""
	I0828 10:47:54.583753    4925 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:47:54.583757    4925 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 10:47:54.583782    4925 start.go:340] cluster config:
	{Name:auto-160000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:47:54.587429    4925 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:47:54.595570    4925 out.go:177] * Starting "auto-160000" primary control-plane node in "auto-160000" cluster
	I0828 10:47:54.599610    4925 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:47:54.599622    4925 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:47:54.599629    4925 cache.go:56] Caching tarball of preloaded images
	I0828 10:47:54.599679    4925 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:47:54.599683    4925 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:47:54.599746    4925 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/auto-160000/config.json ...
	I0828 10:47:54.599761    4925 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/auto-160000/config.json: {Name:mkab356705fc62390581278eabf2f61faa6991d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:47:54.600179    4925 start.go:360] acquireMachinesLock for auto-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:47:54.600217    4925 start.go:364] duration metric: took 32.291µs to acquireMachinesLock for "auto-160000"
	I0828 10:47:54.600229    4925 start.go:93] Provisioning new machine with config: &{Name:auto-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:47:54.600307    4925 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:47:54.608614    4925 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:47:54.624949    4925 start.go:159] libmachine.API.Create for "auto-160000" (driver="qemu2")
	I0828 10:47:54.624974    4925 client.go:168] LocalClient.Create starting
	I0828 10:47:54.625039    4925 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:47:54.625071    4925 main.go:141] libmachine: Decoding PEM data...
	I0828 10:47:54.625079    4925 main.go:141] libmachine: Parsing certificate...
	I0828 10:47:54.625112    4925 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:47:54.625135    4925 main.go:141] libmachine: Decoding PEM data...
	I0828 10:47:54.625143    4925 main.go:141] libmachine: Parsing certificate...
	I0828 10:47:54.625640    4925 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:47:54.786275    4925 main.go:141] libmachine: Creating SSH key...
	I0828 10:47:54.836859    4925 main.go:141] libmachine: Creating Disk image...
	I0828 10:47:54.836866    4925 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:47:54.837083    4925 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/disk.qcow2
	I0828 10:47:54.846496    4925 main.go:141] libmachine: STDOUT: 
	I0828 10:47:54.846520    4925 main.go:141] libmachine: STDERR: 
	I0828 10:47:54.846566    4925 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/disk.qcow2 +20000M
	I0828 10:47:54.854617    4925 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:47:54.854632    4925 main.go:141] libmachine: STDERR: 
	I0828 10:47:54.854655    4925 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/disk.qcow2
	I0828 10:47:54.854660    4925 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:47:54.854672    4925 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:47:54.854696    4925 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:15:81:9f:9f:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/disk.qcow2
	I0828 10:47:54.856303    4925 main.go:141] libmachine: STDOUT: 
	I0828 10:47:54.856318    4925 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:47:54.856346    4925 client.go:171] duration metric: took 231.364709ms to LocalClient.Create
	I0828 10:47:56.857255    4925 start.go:128] duration metric: took 2.257003167s to createHost
	I0828 10:47:56.857313    4925 start.go:83] releasing machines lock for "auto-160000", held for 2.257166875s
	W0828 10:47:56.857360    4925 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:47:56.870350    4925 out.go:177] * Deleting "auto-160000" in qemu2 ...
	W0828 10:47:56.890629    4925 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:47:56.890640    4925 start.go:729] Will try again in 5 seconds ...
	I0828 10:48:01.892029    4925 start.go:360] acquireMachinesLock for auto-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:48:01.892142    4925 start.go:364] duration metric: took 96.625µs to acquireMachinesLock for "auto-160000"
	I0828 10:48:01.892156    4925 start.go:93] Provisioning new machine with config: &{Name:auto-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:auto-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:48:01.892199    4925 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:48:01.900308    4925 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:48:01.916087    4925 start.go:159] libmachine.API.Create for "auto-160000" (driver="qemu2")
	I0828 10:48:01.916113    4925 client.go:168] LocalClient.Create starting
	I0828 10:48:01.916181    4925 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:48:01.916217    4925 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:01.916226    4925 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:01.916262    4925 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:48:01.916289    4925 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:01.916295    4925 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:01.916589    4925 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:48:02.077300    4925 main.go:141] libmachine: Creating SSH key...
	I0828 10:48:02.167236    4925 main.go:141] libmachine: Creating Disk image...
	I0828 10:48:02.167245    4925 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:48:02.167442    4925 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/disk.qcow2
	I0828 10:48:02.177129    4925 main.go:141] libmachine: STDOUT: 
	I0828 10:48:02.177152    4925 main.go:141] libmachine: STDERR: 
	I0828 10:48:02.177204    4925 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/disk.qcow2 +20000M
	I0828 10:48:02.185125    4925 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:48:02.185137    4925 main.go:141] libmachine: STDERR: 
	I0828 10:48:02.185149    4925 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/disk.qcow2
	I0828 10:48:02.185154    4925 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:48:02.185165    4925 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:48:02.185189    4925 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:11:06:54:29:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/auto-160000/disk.qcow2
	I0828 10:48:02.186844    4925 main.go:141] libmachine: STDOUT: 
	I0828 10:48:02.186862    4925 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:48:02.186873    4925 client.go:171] duration metric: took 270.7665ms to LocalClient.Create
	I0828 10:48:04.189018    4925 start.go:128] duration metric: took 2.296862541s to createHost
	I0828 10:48:04.189118    4925 start.go:83] releasing machines lock for "auto-160000", held for 2.297042208s
	W0828 10:48:04.189451    4925 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:04.205218    4925 out.go:201] 
	W0828 10:48:04.209267    4925 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:48:04.209326    4925 out.go:270] * 
	* 
	W0828 10:48:04.211707    4925 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:48:04.221000    4925 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.76s)
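Every start in this group fails the same way: /opt/socket_vmnet/bin/socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so QEMU is never handed the network file descriptor (the -netdev socket,id=net0,fd=3 argument in the command line above) and minikube exits with GUEST_PROVISION. A minimal check-and-restart sketch for the affected host follows; the Homebrew service name assumes a standard `brew install socket_vmnet` setup and is not taken from this log:

	# confirm the daemon's unix socket exists (the path comes from the log above)
	ls -l /var/run/socket_vmnet

	# socket_vmnet must run as root before the qemu2 driver can use the socket_vmnet network
	sudo brew services start socket_vmnet

	# with the daemon up, one failed profile can be retried by hand, mirroring the test invocation
	out/minikube-darwin-arm64 start -p auto-160000 --memory=3072 --driver=qemu2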

TestNetworkPlugins/group/calico/Start (10.24s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.2412895s)

-- stdout --
	* [calico-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-160000" primary control-plane node in "calico-160000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:48:06.374205    5034 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:48:06.374341    5034 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:48:06.374344    5034 out.go:358] Setting ErrFile to fd 2...
	I0828 10:48:06.374346    5034 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:48:06.374487    5034 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:48:06.375588    5034 out.go:352] Setting JSON to false
	I0828 10:48:06.391900    5034 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4650,"bootTime":1724862636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:48:06.391967    5034 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:48:06.398647    5034 out.go:177] * [calico-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:48:06.405508    5034 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:48:06.405542    5034 notify.go:220] Checking for updates...
	I0828 10:48:06.413461    5034 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:48:06.416472    5034 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:48:06.419445    5034 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:48:06.422539    5034 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:48:06.425526    5034 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:48:06.433750    5034 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:48:06.433819    5034 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:48:06.433872    5034 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:48:06.437468    5034 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:48:06.444495    5034 start.go:297] selected driver: qemu2
	I0828 10:48:06.444501    5034 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:48:06.444506    5034 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:48:06.446622    5034 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:48:06.450511    5034 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:48:06.453522    5034 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:48:06.453572    5034 cni.go:84] Creating CNI manager for "calico"
	I0828 10:48:06.453578    5034 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0828 10:48:06.453619    5034 start.go:340] cluster config:
	{Name:calico-160000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:48:06.456903    5034 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:48:06.464378    5034 out.go:177] * Starting "calico-160000" primary control-plane node in "calico-160000" cluster
	I0828 10:48:06.468537    5034 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:48:06.468552    5034 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:48:06.468560    5034 cache.go:56] Caching tarball of preloaded images
	I0828 10:48:06.468623    5034 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:48:06.468628    5034 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:48:06.468701    5034 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/calico-160000/config.json ...
	I0828 10:48:06.468711    5034 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/calico-160000/config.json: {Name:mkc4f14478367a53032ab0b20bd2208f61bf45dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:48:06.469277    5034 start.go:360] acquireMachinesLock for calico-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:48:06.469307    5034 start.go:364] duration metric: took 24.417µs to acquireMachinesLock for "calico-160000"
	I0828 10:48:06.469317    5034 start.go:93] Provisioning new machine with config: &{Name:calico-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:calico-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:48:06.469348    5034 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:48:06.473524    5034 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:48:06.488663    5034 start.go:159] libmachine.API.Create for "calico-160000" (driver="qemu2")
	I0828 10:48:06.488693    5034 client.go:168] LocalClient.Create starting
	I0828 10:48:06.488768    5034 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:48:06.488799    5034 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:06.488809    5034 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:06.488849    5034 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:48:06.488874    5034 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:06.488882    5034 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:06.489220    5034 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:48:06.650628    5034 main.go:141] libmachine: Creating SSH key...
	I0828 10:48:06.786732    5034 main.go:141] libmachine: Creating Disk image...
	I0828 10:48:06.786741    5034 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:48:06.786957    5034 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/disk.qcow2
	I0828 10:48:06.796805    5034 main.go:141] libmachine: STDOUT: 
	I0828 10:48:06.796828    5034 main.go:141] libmachine: STDERR: 
	I0828 10:48:06.796884    5034 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/disk.qcow2 +20000M
	I0828 10:48:06.805322    5034 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:48:06.805338    5034 main.go:141] libmachine: STDERR: 
	I0828 10:48:06.805360    5034 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/disk.qcow2
	I0828 10:48:06.805364    5034 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:48:06.805379    5034 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:48:06.805405    5034 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:cc:35:db:52:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/disk.qcow2
	I0828 10:48:06.807042    5034 main.go:141] libmachine: STDOUT: 
	I0828 10:48:06.807063    5034 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:48:06.807080    5034 client.go:171] duration metric: took 318.392666ms to LocalClient.Create
	I0828 10:48:08.809208    5034 start.go:128] duration metric: took 2.33991125s to createHost
	I0828 10:48:08.809290    5034 start.go:83] releasing machines lock for "calico-160000", held for 2.340054875s
	W0828 10:48:08.809353    5034 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:08.824428    5034 out.go:177] * Deleting "calico-160000" in qemu2 ...
	W0828 10:48:08.851415    5034 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:08.851449    5034 start.go:729] Will try again in 5 seconds ...
	I0828 10:48:13.852156    5034 start.go:360] acquireMachinesLock for calico-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:48:13.852726    5034 start.go:364] duration metric: took 432.791µs to acquireMachinesLock for "calico-160000"
	I0828 10:48:13.852799    5034 start.go:93] Provisioning new machine with config: &{Name:calico-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:calico-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:48:13.853088    5034 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:48:13.863503    5034 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:48:13.907908    5034 start.go:159] libmachine.API.Create for "calico-160000" (driver="qemu2")
	I0828 10:48:13.907966    5034 client.go:168] LocalClient.Create starting
	I0828 10:48:13.908103    5034 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:48:13.908183    5034 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:13.908201    5034 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:13.908268    5034 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:48:13.908313    5034 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:13.908331    5034 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:13.909005    5034 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:48:14.077054    5034 main.go:141] libmachine: Creating SSH key...
	I0828 10:48:14.521360    5034 main.go:141] libmachine: Creating Disk image...
	I0828 10:48:14.521376    5034 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:48:14.521590    5034 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/disk.qcow2
	I0828 10:48:14.531576    5034 main.go:141] libmachine: STDOUT: 
	I0828 10:48:14.531600    5034 main.go:141] libmachine: STDERR: 
	I0828 10:48:14.531670    5034 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/disk.qcow2 +20000M
	I0828 10:48:14.540224    5034 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:48:14.540241    5034 main.go:141] libmachine: STDERR: 
	I0828 10:48:14.540255    5034 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/disk.qcow2
	I0828 10:48:14.540262    5034 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:48:14.540271    5034 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:48:14.540311    5034 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:28:32:c1:07:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/calico-160000/disk.qcow2
	I0828 10:48:14.542073    5034 main.go:141] libmachine: STDOUT: 
	I0828 10:48:14.542090    5034 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:48:14.542105    5034 client.go:171] duration metric: took 634.152959ms to LocalClient.Create
	I0828 10:48:16.544297    5034 start.go:128] duration metric: took 2.691261417s to createHost
	I0828 10:48:16.544405    5034 start.go:83] releasing machines lock for "calico-160000", held for 2.691748s
	W0828 10:48:16.544862    5034 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:16.554483    5034 out.go:201] 
	W0828 10:48:16.559767    5034 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:48:16.559792    5034 out.go:270] * 
	* 
	W0828 10:48:16.562434    5034 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:48:16.572513    5034 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.24s)

TestNetworkPlugins/group/custom-flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.878145166s)

-- stdout --
	* [custom-flannel-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-160000" primary control-plane node in "custom-flannel-160000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:48:18.952033    5151 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:48:18.952150    5151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:48:18.952152    5151 out.go:358] Setting ErrFile to fd 2...
	I0828 10:48:18.952155    5151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:48:18.952259    5151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:48:18.953329    5151 out.go:352] Setting JSON to false
	I0828 10:48:18.969881    5151 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4662,"bootTime":1724862636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:48:18.969958    5151 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:48:18.976202    5151 out.go:177] * [custom-flannel-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:48:18.985016    5151 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:48:18.985066    5151 notify.go:220] Checking for updates...
	I0828 10:48:18.990547    5151 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:48:18.993906    5151 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:48:18.997965    5151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:48:18.999450    5151 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:48:19.002950    5151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:48:19.006253    5151 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:48:19.006330    5151 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:48:19.006377    5151 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:48:19.009738    5151 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:48:19.016938    5151 start.go:297] selected driver: qemu2
	I0828 10:48:19.016945    5151 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:48:19.016961    5151 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:48:19.019313    5151 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:48:19.022794    5151 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:48:19.026077    5151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:48:19.026101    5151 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0828 10:48:19.026118    5151 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0828 10:48:19.026143    5151 start.go:340] cluster config:
	{Name:custom-flannel-160000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:48:19.029931    5151 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:48:19.037946    5151 out.go:177] * Starting "custom-flannel-160000" primary control-plane node in "custom-flannel-160000" cluster
	I0828 10:48:19.041944    5151 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:48:19.041961    5151 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:48:19.041971    5151 cache.go:56] Caching tarball of preloaded images
	I0828 10:48:19.042033    5151 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:48:19.042039    5151 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:48:19.042108    5151 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/custom-flannel-160000/config.json ...
	I0828 10:48:19.042119    5151 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/custom-flannel-160000/config.json: {Name:mkdbd7692c516d374e04b5188a2b1e09aaa46a15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:48:19.042525    5151 start.go:360] acquireMachinesLock for custom-flannel-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:48:19.042560    5151 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "custom-flannel-160000"
	I0828 10:48:19.042571    5151 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:48:19.042604    5151 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:48:19.049989    5151 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:48:19.067097    5151 start.go:159] libmachine.API.Create for "custom-flannel-160000" (driver="qemu2")
	I0828 10:48:19.067136    5151 client.go:168] LocalClient.Create starting
	I0828 10:48:19.067207    5151 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:48:19.067241    5151 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:19.067249    5151 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:19.067293    5151 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:48:19.067315    5151 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:19.067323    5151 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:19.067762    5151 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:48:19.231035    5151 main.go:141] libmachine: Creating SSH key...
	I0828 10:48:19.323737    5151 main.go:141] libmachine: Creating Disk image...
	I0828 10:48:19.323743    5151 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:48:19.323939    5151 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/disk.qcow2
	I0828 10:48:19.333342    5151 main.go:141] libmachine: STDOUT: 
	I0828 10:48:19.333362    5151 main.go:141] libmachine: STDERR: 
	I0828 10:48:19.333413    5151 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/disk.qcow2 +20000M
	I0828 10:48:19.341550    5151 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:48:19.341570    5151 main.go:141] libmachine: STDERR: 
	I0828 10:48:19.341585    5151 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/disk.qcow2
	I0828 10:48:19.341590    5151 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:48:19.341601    5151 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:48:19.341633    5151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:4b:17:12:c3:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/disk.qcow2
	I0828 10:48:19.343244    5151 main.go:141] libmachine: STDOUT: 
	I0828 10:48:19.343258    5151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:48:19.343278    5151 client.go:171] duration metric: took 276.147ms to LocalClient.Create
	I0828 10:48:21.345434    5151 start.go:128] duration metric: took 2.302872334s to createHost
	I0828 10:48:21.345496    5151 start.go:83] releasing machines lock for "custom-flannel-160000", held for 2.303003709s
	W0828 10:48:21.345611    5151 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:21.360060    5151 out.go:177] * Deleting "custom-flannel-160000" in qemu2 ...
	W0828 10:48:21.393399    5151 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:21.393429    5151 start.go:729] Will try again in 5 seconds ...
	I0828 10:48:26.395454    5151 start.go:360] acquireMachinesLock for custom-flannel-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:48:26.396064    5151 start.go:364] duration metric: took 501.458µs to acquireMachinesLock for "custom-flannel-160000"
	I0828 10:48:26.396174    5151 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:48:26.396556    5151 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:48:26.409938    5151 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:48:26.459001    5151 start.go:159] libmachine.API.Create for "custom-flannel-160000" (driver="qemu2")
	I0828 10:48:26.459074    5151 client.go:168] LocalClient.Create starting
	I0828 10:48:26.459246    5151 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:48:26.459316    5151 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:26.459335    5151 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:26.459402    5151 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:48:26.459447    5151 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:26.459459    5151 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:26.460045    5151 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:48:26.632471    5151 main.go:141] libmachine: Creating SSH key...
	I0828 10:48:26.730901    5151 main.go:141] libmachine: Creating Disk image...
	I0828 10:48:26.730914    5151 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:48:26.731084    5151 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/disk.qcow2
	I0828 10:48:26.740561    5151 main.go:141] libmachine: STDOUT: 
	I0828 10:48:26.740583    5151 main.go:141] libmachine: STDERR: 
	I0828 10:48:26.740634    5151 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/disk.qcow2 +20000M
	I0828 10:48:26.748872    5151 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:48:26.748887    5151 main.go:141] libmachine: STDERR: 
	I0828 10:48:26.748899    5151 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/disk.qcow2
	I0828 10:48:26.748903    5151 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:48:26.748915    5151 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:48:26.748950    5151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:1e:a7:c1:db:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/custom-flannel-160000/disk.qcow2
	I0828 10:48:26.750572    5151 main.go:141] libmachine: STDOUT: 
	I0828 10:48:26.750589    5151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:48:26.750603    5151 client.go:171] duration metric: took 291.531834ms to LocalClient.Create
	I0828 10:48:28.752750    5151 start.go:128] duration metric: took 2.3562255s to createHost
	I0828 10:48:28.752840    5151 start.go:83] releasing machines lock for "custom-flannel-160000", held for 2.356816667s
	W0828 10:48:28.753333    5151 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:28.769001    5151 out.go:201] 
	W0828 10:48:28.772972    5151 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:48:28.773009    5151 out.go:270] * 
	* 
	W0828 10:48:28.775972    5151 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:48:28.791034    5151 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.88s)
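
Every start in this group fails at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and the harness sees exit status 80 once the driver gives up. The sketch below is a minimal Go probe for that unix socket, written for this report; it assumes only the socket path shown in the log and is not part of minikube or the test suite.

	// probe.go: check whether the socket_vmnet daemon is accepting
	// connections on the path the failing runs above try to use.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dumps above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A daemon that is not listening surfaces as "connect: connection
			// refused", matching the STDERR lines in the traces above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails on the CI agent, the socket_vmnet service needs to be restarted there; no amount of retrying inside minikube can succeed while the daemon is down, which is why every network-plugin Start test in this group fails in roughly ten seconds.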

TestNetworkPlugins/group/false/Start (9.9s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.903744125s)

-- stdout --
	* [false-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-160000" primary control-plane node in "false-160000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:48:31.174091    5271 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:48:31.174239    5271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:48:31.174243    5271 out.go:358] Setting ErrFile to fd 2...
	I0828 10:48:31.174245    5271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:48:31.174371    5271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:48:31.175458    5271 out.go:352] Setting JSON to false
	I0828 10:48:31.191966    5271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4675,"bootTime":1724862636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:48:31.192033    5271 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:48:31.198672    5271 out.go:177] * [false-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:48:31.206808    5271 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:48:31.206822    5271 notify.go:220] Checking for updates...
	I0828 10:48:31.212735    5271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:48:31.215834    5271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:48:31.219685    5271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:48:31.222791    5271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:48:31.225753    5271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:48:31.229145    5271 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:48:31.229210    5271 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:48:31.229252    5271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:48:31.233787    5271 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:48:31.240749    5271 start.go:297] selected driver: qemu2
	I0828 10:48:31.240755    5271 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:48:31.240761    5271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:48:31.243003    5271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:48:31.246724    5271 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:48:31.249869    5271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:48:31.249900    5271 cni.go:84] Creating CNI manager for "false"
	I0828 10:48:31.249928    5271 start.go:340] cluster config:
	{Name:false-160000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:48:31.253531    5271 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:48:31.261750    5271 out.go:177] * Starting "false-160000" primary control-plane node in "false-160000" cluster
	I0828 10:48:31.264681    5271 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:48:31.264697    5271 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:48:31.264709    5271 cache.go:56] Caching tarball of preloaded images
	I0828 10:48:31.264765    5271 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:48:31.264771    5271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:48:31.264857    5271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/false-160000/config.json ...
	I0828 10:48:31.264868    5271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/false-160000/config.json: {Name:mka51c40c24997dc73efff32860f139f0cebf74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:48:31.265259    5271 start.go:360] acquireMachinesLock for false-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:48:31.265298    5271 start.go:364] duration metric: took 31.458µs to acquireMachinesLock for "false-160000"
	I0828 10:48:31.265313    5271 start.go:93] Provisioning new machine with config: &{Name:false-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:48:31.265340    5271 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:48:31.272667    5271 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:48:31.289585    5271 start.go:159] libmachine.API.Create for "false-160000" (driver="qemu2")
	I0828 10:48:31.289610    5271 client.go:168] LocalClient.Create starting
	I0828 10:48:31.289688    5271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:48:31.289726    5271 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:31.289736    5271 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:31.289783    5271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:48:31.289809    5271 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:31.289822    5271 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:31.290185    5271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:48:31.451329    5271 main.go:141] libmachine: Creating SSH key...
	I0828 10:48:31.546346    5271 main.go:141] libmachine: Creating Disk image...
	I0828 10:48:31.546351    5271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:48:31.546532    5271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/disk.qcow2
	I0828 10:48:31.555779    5271 main.go:141] libmachine: STDOUT: 
	I0828 10:48:31.555796    5271 main.go:141] libmachine: STDERR: 
	I0828 10:48:31.555842    5271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/disk.qcow2 +20000M
	I0828 10:48:31.563920    5271 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:48:31.563936    5271 main.go:141] libmachine: STDERR: 
	I0828 10:48:31.563958    5271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/disk.qcow2
	I0828 10:48:31.563967    5271 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:48:31.563979    5271 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:48:31.564003    5271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:a6:4e:f8:a5:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/disk.qcow2
	I0828 10:48:31.565679    5271 main.go:141] libmachine: STDOUT: 
	I0828 10:48:31.565695    5271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:48:31.565714    5271 client.go:171] duration metric: took 276.108458ms to LocalClient.Create
	I0828 10:48:33.567406    5271 start.go:128] duration metric: took 2.3021375s to createHost
	I0828 10:48:33.567418    5271 start.go:83] releasing machines lock for "false-160000", held for 2.302193875s
	W0828 10:48:33.567445    5271 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:33.577864    5271 out.go:177] * Deleting "false-160000" in qemu2 ...
	W0828 10:48:33.588360    5271 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:33.588369    5271 start.go:729] Will try again in 5 seconds ...
	I0828 10:48:38.590412    5271 start.go:360] acquireMachinesLock for false-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:48:38.590716    5271 start.go:364] duration metric: took 234.667µs to acquireMachinesLock for "false-160000"
	I0828 10:48:38.590790    5271 start.go:93] Provisioning new machine with config: &{Name:false-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:48:38.591031    5271 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:48:38.601663    5271 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:48:38.640514    5271 start.go:159] libmachine.API.Create for "false-160000" (driver="qemu2")
	I0828 10:48:38.640564    5271 client.go:168] LocalClient.Create starting
	I0828 10:48:38.640660    5271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:48:38.640714    5271 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:38.640730    5271 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:38.640784    5271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:48:38.640825    5271 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:38.640834    5271 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:38.641303    5271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:48:38.808601    5271 main.go:141] libmachine: Creating SSH key...
	I0828 10:48:38.991612    5271 main.go:141] libmachine: Creating Disk image...
	I0828 10:48:38.991620    5271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:48:38.991825    5271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/disk.qcow2
	I0828 10:48:39.001458    5271 main.go:141] libmachine: STDOUT: 
	I0828 10:48:39.001477    5271 main.go:141] libmachine: STDERR: 
	I0828 10:48:39.001533    5271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/disk.qcow2 +20000M
	I0828 10:48:39.009656    5271 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:48:39.009672    5271 main.go:141] libmachine: STDERR: 
	I0828 10:48:39.009684    5271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/disk.qcow2
	I0828 10:48:39.009693    5271 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:48:39.009705    5271 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:48:39.009744    5271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:5a:2f:3e:05:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/false-160000/disk.qcow2
	I0828 10:48:39.011526    5271 main.go:141] libmachine: STDOUT: 
	I0828 10:48:39.011541    5271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:48:39.011553    5271 client.go:171] duration metric: took 370.996166ms to LocalClient.Create
	I0828 10:48:41.013562    5271 start.go:128] duration metric: took 2.422595666s to createHost
	I0828 10:48:41.013582    5271 start.go:83] releasing machines lock for "false-160000", held for 2.422934834s
	W0828 10:48:41.013701    5271 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:41.025942    5271 out.go:201] 
	W0828 10:48:41.029935    5271 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:48:41.029941    5271 out.go:270] * 
	* 
	W0828 10:48:41.030391    5271 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:48:41.041840    5271 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.90s)
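
The trace above also makes the driver's recovery shape visible: createHost fails, the half-created machine is deleted, the driver waits five seconds ("Will try again in 5 seconds ..."), makes exactly one more attempt, and then exits with GUEST_PROVISION. The Go sketch below reproduces that single-retry control flow; createHost and deleteHost are hypothetical stand-ins for the libmachine calls in the log, and the error text is copied from the STDERR above. It is a simplification for illustration, not minikube's actual start.go logic.

	// retry.go: one cleanup-and-retry after a fixed 5-second wait, then a
	// terminal failure, mirroring the sequence in the trace above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// errConnRefused mimics the error every attempt hits in this environment.
	var errConnRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

	func createHost(name string) error { return errConnRefused } // always fails on this agent
	func deleteHost(name string)       { fmt.Printf("* Deleting %q in qemu2 ...\n", name) }

	func startWithRetry(name string) error {
		err := createHost(name)
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		deleteHost(name)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return createHost(name) // second and final attempt
	}

	func main() {
		if err := startWithRetry("false-160000"); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err) // harness reports exit status 80
		}
	}

Because the failure is environmental, the second attempt fails identically; the fixed single retry accounts for most of the roughly ten seconds each of these tests spends before reporting exit status 80.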

TestNetworkPlugins/group/kindnet/Start (9.97s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E0828 10:48:50.782975    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.96714575s)

-- stdout --
	* [kindnet-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-160000" primary control-plane node in "kindnet-160000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:48:43.249168    5382 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:48:43.249309    5382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:48:43.249315    5382 out.go:358] Setting ErrFile to fd 2...
	I0828 10:48:43.249318    5382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:48:43.249449    5382 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:48:43.250554    5382 out.go:352] Setting JSON to false
	I0828 10:48:43.266770    5382 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4687,"bootTime":1724862636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:48:43.266848    5382 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:48:43.272888    5382 out.go:177] * [kindnet-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:48:43.280673    5382 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:48:43.280770    5382 notify.go:220] Checking for updates...
	I0828 10:48:43.287665    5382 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:48:43.291056    5382 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:48:43.294755    5382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:48:43.295999    5382 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:48:43.298754    5382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:48:43.302060    5382 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:48:43.302131    5382 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:48:43.302182    5382 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:48:43.306563    5382 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:48:43.313698    5382 start.go:297] selected driver: qemu2
	I0828 10:48:43.313705    5382 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:48:43.313711    5382 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:48:43.316025    5382 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:48:43.319792    5382 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:48:43.322710    5382 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:48:43.322729    5382 cni.go:84] Creating CNI manager for "kindnet"
	I0828 10:48:43.322733    5382 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0828 10:48:43.322758    5382 start.go:340] cluster config:
	{Name:kindnet-160000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:48:43.326125    5382 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:48:43.332636    5382 out.go:177] * Starting "kindnet-160000" primary control-plane node in "kindnet-160000" cluster
	I0828 10:48:43.336716    5382 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:48:43.336734    5382 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:48:43.336740    5382 cache.go:56] Caching tarball of preloaded images
	I0828 10:48:43.336784    5382 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:48:43.336789    5382 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:48:43.336850    5382 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/kindnet-160000/config.json ...
	I0828 10:48:43.336860    5382 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/kindnet-160000/config.json: {Name:mk66d66a1c15a991ffd1d136f6c059303ac99d75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:48:43.337064    5382 start.go:360] acquireMachinesLock for kindnet-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:48:43.337093    5382 start.go:364] duration metric: took 23.792µs to acquireMachinesLock for "kindnet-160000"
	I0828 10:48:43.337104    5382 start.go:93] Provisioning new machine with config: &{Name:kindnet-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:48:43.337129    5382 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:48:43.344684    5382 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:48:43.359678    5382 start.go:159] libmachine.API.Create for "kindnet-160000" (driver="qemu2")
	I0828 10:48:43.359702    5382 client.go:168] LocalClient.Create starting
	I0828 10:48:43.359763    5382 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:48:43.359796    5382 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:43.359807    5382 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:43.359846    5382 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:48:43.359867    5382 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:43.359874    5382 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:43.360204    5382 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:48:43.523069    5382 main.go:141] libmachine: Creating SSH key...
	I0828 10:48:43.702726    5382 main.go:141] libmachine: Creating Disk image...
	I0828 10:48:43.702736    5382 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:48:43.702959    5382 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/disk.qcow2
	I0828 10:48:43.712953    5382 main.go:141] libmachine: STDOUT: 
	I0828 10:48:43.712974    5382 main.go:141] libmachine: STDERR: 
	I0828 10:48:43.713032    5382 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/disk.qcow2 +20000M
	I0828 10:48:43.721059    5382 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:48:43.721075    5382 main.go:141] libmachine: STDERR: 
	I0828 10:48:43.721089    5382 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/disk.qcow2
	I0828 10:48:43.721093    5382 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:48:43.721106    5382 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:48:43.721146    5382 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:30:ac:bc:b6:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/disk.qcow2
	I0828 10:48:43.722792    5382 main.go:141] libmachine: STDOUT: 
	I0828 10:48:43.722811    5382 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:48:43.722827    5382 client.go:171] duration metric: took 363.132833ms to LocalClient.Create
	I0828 10:48:45.724966    5382 start.go:128] duration metric: took 2.387898333s to createHost
	I0828 10:48:45.725022    5382 start.go:83] releasing machines lock for "kindnet-160000", held for 2.388003125s
	W0828 10:48:45.725077    5382 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:45.734762    5382 out.go:177] * Deleting "kindnet-160000" in qemu2 ...
	W0828 10:48:45.760911    5382 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:45.760932    5382 start.go:729] Will try again in 5 seconds ...
	I0828 10:48:50.762926    5382 start.go:360] acquireMachinesLock for kindnet-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:48:50.763240    5382 start.go:364] duration metric: took 238.208µs to acquireMachinesLock for "kindnet-160000"
	I0828 10:48:50.763286    5382 start.go:93] Provisioning new machine with config: &{Name:kindnet-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:48:50.763384    5382 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:48:50.770623    5382 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:48:50.804027    5382 start.go:159] libmachine.API.Create for "kindnet-160000" (driver="qemu2")
	I0828 10:48:50.804082    5382 client.go:168] LocalClient.Create starting
	I0828 10:48:50.804201    5382 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:48:50.804269    5382 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:50.804286    5382 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:50.804346    5382 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:48:50.804387    5382 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:50.804398    5382 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:50.804858    5382 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:48:50.973725    5382 main.go:141] libmachine: Creating SSH key...
	I0828 10:48:51.124832    5382 main.go:141] libmachine: Creating Disk image...
	I0828 10:48:51.124843    5382 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:48:51.125033    5382 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/disk.qcow2
	I0828 10:48:51.134241    5382 main.go:141] libmachine: STDOUT: 
	I0828 10:48:51.134268    5382 main.go:141] libmachine: STDERR: 
	I0828 10:48:51.134320    5382 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/disk.qcow2 +20000M
	I0828 10:48:51.142420    5382 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:48:51.142442    5382 main.go:141] libmachine: STDERR: 
	I0828 10:48:51.142459    5382 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/disk.qcow2
	I0828 10:48:51.142469    5382 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:48:51.142480    5382 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:48:51.142517    5382 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:05:02:3a:37:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kindnet-160000/disk.qcow2
	I0828 10:48:51.144160    5382 main.go:141] libmachine: STDOUT: 
	I0828 10:48:51.144173    5382 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:48:51.144186    5382 client.go:171] duration metric: took 340.110166ms to LocalClient.Create
	I0828 10:48:53.146261    5382 start.go:128] duration metric: took 2.382913416s to createHost
	I0828 10:48:53.146299    5382 start.go:83] releasing machines lock for "kindnet-160000", held for 2.383115583s
	W0828 10:48:53.146452    5382 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:53.157874    5382 out.go:201] 
	W0828 10:48:53.161854    5382 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:48:53.161861    5382 out.go:270] * 
	* 
	W0828 10:48:53.162485    5382 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:48:53.177858    5382 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.97s)
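
Note: every failure in this group bottoms out in the same stderr line, `Failed to connect to "/var/run/socket_vmnet": Connection refused`: socket_vmnet_client cannot reach the socket_vmnet daemon, so QEMU is never started. A minimal Go sketch (illustrative only, not minikube code; the socket path is taken from the logs above) probes the socket the same way and separates "connection refused" (socket file present, no daemon serving it) from "no such file" (daemon never created the socket):

	// probe.go: hedged diagnostic sketch; the path comes from the failing logs.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const path = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			// On these runners this prints the same "connection refused"
			// seen in the stderr blocks above.
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", path, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("a daemon is accepting connections on %s\n", path)
	}

If the dial is refused while the socket file exists, the likely fix is restarting the socket_vmnet daemon on the CI host rather than anything in the tests themselves.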

TestNetworkPlugins/group/flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.791561875s)

-- stdout --
	* [flannel-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-160000" primary control-plane node in "flannel-160000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:48:55.450904    5500 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:48:55.451026    5500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:48:55.451030    5500 out.go:358] Setting ErrFile to fd 2...
	I0828 10:48:55.451032    5500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:48:55.451174    5500 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:48:55.452254    5500 out.go:352] Setting JSON to false
	I0828 10:48:55.468472    5500 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4699,"bootTime":1724862636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:48:55.468538    5500 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:48:55.474723    5500 out.go:177] * [flannel-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:48:55.482704    5500 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:48:55.482742    5500 notify.go:220] Checking for updates...
	I0828 10:48:55.489560    5500 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:48:55.492660    5500 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:48:55.496497    5500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:48:55.499633    5500 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:48:55.502656    5500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:48:55.505889    5500 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:48:55.505963    5500 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:48:55.506003    5500 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:48:55.510640    5500 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:48:55.516604    5500 start.go:297] selected driver: qemu2
	I0828 10:48:55.516611    5500 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:48:55.516619    5500 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:48:55.519072    5500 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:48:55.521593    5500 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:48:55.525724    5500 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:48:55.525762    5500 cni.go:84] Creating CNI manager for "flannel"
	I0828 10:48:55.525767    5500 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0828 10:48:55.525793    5500 start.go:340] cluster config:
	{Name:flannel-160000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:48:55.529701    5500 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:48:55.538636    5500 out.go:177] * Starting "flannel-160000" primary control-plane node in "flannel-160000" cluster
	I0828 10:48:55.542411    5500 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:48:55.542424    5500 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:48:55.542432    5500 cache.go:56] Caching tarball of preloaded images
	I0828 10:48:55.542490    5500 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:48:55.542496    5500 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:48:55.542555    5500 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/flannel-160000/config.json ...
	I0828 10:48:55.542568    5500 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/flannel-160000/config.json: {Name:mkc375a0735b754c318360679b3cafa8c69a6019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:48:55.542808    5500 start.go:360] acquireMachinesLock for flannel-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:48:55.542848    5500 start.go:364] duration metric: took 33.25µs to acquireMachinesLock for "flannel-160000"
	I0828 10:48:55.542861    5500 start.go:93] Provisioning new machine with config: &{Name:flannel-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:48:55.542888    5500 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:48:55.550642    5500 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:48:55.568702    5500 start.go:159] libmachine.API.Create for "flannel-160000" (driver="qemu2")
	I0828 10:48:55.568732    5500 client.go:168] LocalClient.Create starting
	I0828 10:48:55.568803    5500 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:48:55.568838    5500 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:55.568847    5500 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:55.568890    5500 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:48:55.568919    5500 main.go:141] libmachine: Decoding PEM data...
	I0828 10:48:55.568928    5500 main.go:141] libmachine: Parsing certificate...
	I0828 10:48:55.569324    5500 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:48:55.729394    5500 main.go:141] libmachine: Creating SSH key...
	I0828 10:48:55.791145    5500 main.go:141] libmachine: Creating Disk image...
	I0828 10:48:55.791150    5500 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:48:55.791322    5500 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/disk.qcow2
	I0828 10:48:55.801219    5500 main.go:141] libmachine: STDOUT: 
	I0828 10:48:55.801247    5500 main.go:141] libmachine: STDERR: 
	I0828 10:48:55.801310    5500 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/disk.qcow2 +20000M
	I0828 10:48:55.809616    5500 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:48:55.809638    5500 main.go:141] libmachine: STDERR: 
	I0828 10:48:55.809656    5500 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/disk.qcow2
	I0828 10:48:55.809661    5500 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:48:55.809670    5500 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:48:55.809696    5500 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:23:14:16:eb:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/disk.qcow2
	I0828 10:48:55.811354    5500 main.go:141] libmachine: STDOUT: 
	I0828 10:48:55.811367    5500 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:48:55.811387    5500 client.go:171] duration metric: took 242.658292ms to LocalClient.Create
	I0828 10:48:57.813538    5500 start.go:128] duration metric: took 2.270694708s to createHost
	I0828 10:48:57.813618    5500 start.go:83] releasing machines lock for "flannel-160000", held for 2.270836959s
	W0828 10:48:57.813782    5500 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:57.824072    5500 out.go:177] * Deleting "flannel-160000" in qemu2 ...
	W0828 10:48:57.858643    5500 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:48:57.858675    5500 start.go:729] Will try again in 5 seconds ...
	I0828 10:49:02.860629    5500 start.go:360] acquireMachinesLock for flannel-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:49:02.860920    5500 start.go:364] duration metric: took 240.75µs to acquireMachinesLock for "flannel-160000"
	I0828 10:49:02.860997    5500 start.go:93] Provisioning new machine with config: &{Name:flannel-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:49:02.861123    5500 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:49:02.870009    5500 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:49:02.899335    5500 start.go:159] libmachine.API.Create for "flannel-160000" (driver="qemu2")
	I0828 10:49:02.899384    5500 client.go:168] LocalClient.Create starting
	I0828 10:49:02.899478    5500 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:49:02.899525    5500 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:02.899536    5500 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:02.899592    5500 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:49:02.899623    5500 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:02.899630    5500 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:02.900049    5500 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:49:03.063019    5500 main.go:141] libmachine: Creating SSH key...
	I0828 10:49:03.156992    5500 main.go:141] libmachine: Creating Disk image...
	I0828 10:49:03.157001    5500 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:49:03.157206    5500 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/disk.qcow2
	I0828 10:49:03.167001    5500 main.go:141] libmachine: STDOUT: 
	I0828 10:49:03.167021    5500 main.go:141] libmachine: STDERR: 
	I0828 10:49:03.167073    5500 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/disk.qcow2 +20000M
	I0828 10:49:03.175246    5500 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:49:03.175268    5500 main.go:141] libmachine: STDERR: 
	I0828 10:49:03.175275    5500 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/disk.qcow2
	I0828 10:49:03.175279    5500 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:49:03.175291    5500 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:49:03.175327    5500 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:e3:db:c7:c3:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/flannel-160000/disk.qcow2
	I0828 10:49:03.177096    5500 main.go:141] libmachine: STDOUT: 
	I0828 10:49:03.177113    5500 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:49:03.177137    5500 client.go:171] duration metric: took 277.74775ms to LocalClient.Create
	I0828 10:49:05.179156    5500 start.go:128] duration metric: took 2.318100125s to createHost
	I0828 10:49:05.179179    5500 start.go:83] releasing machines lock for "flannel-160000", held for 2.318313666s
	W0828 10:49:05.179305    5500 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:05.187542    5500 out.go:201] 
	W0828 10:49:05.195510    5500 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:49:05.195515    5500 out.go:270] * 
	* 
	W0828 10:49:05.196030    5500 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:49:05.204430    5500 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.79s)
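
Note: the stdout above shows the start flow's two attempts: the create fails, the half-built "flannel-160000" profile is deleted, minikube waits a fixed five seconds ("Will try again in 5 seconds ..."), retries once, and then exits with status 80 (GUEST_PROVISION). A simplified Go reduction of that control flow (an assumed sketch for illustration; createHost and its error string are stand-ins mirroring the log, not minikube's actual API):

	// retry.go: assumed reduction of the create/delete/retry shape in the log.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the libmachine create step; here it always
	// fails the way the logs do when no socket_vmnet daemon is listening.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithRetry(name string) error {
		err := createHost(name)
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		fmt.Printf("* Deleting %q in qemu2 ...\n", name) // cleanup of the partial machine
		time.Sleep(5 * time.Second)                      // "Will try again in 5 seconds ..."
		return createHost(name)                          // second and final attempt
	}

	func main() {
		if err := startWithRetry("flannel-160000"); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}

Because the daemon never comes back within those five seconds, the retry is guaranteed to hit the same refusal, which is why each test in this group fails in roughly ten seconds.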

TestNetworkPlugins/group/enable-default-cni/Start (9.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
E0828 10:49:10.219639    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.733284542s)

-- stdout --
	* [enable-default-cni-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-160000" primary control-plane node in "enable-default-cni-160000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:49:07.552018    5619 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:49:07.552149    5619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:49:07.552155    5619 out.go:358] Setting ErrFile to fd 2...
	I0828 10:49:07.552158    5619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:49:07.552307    5619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:49:07.553387    5619 out.go:352] Setting JSON to false
	I0828 10:49:07.569533    5619 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4711,"bootTime":1724862636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:49:07.569609    5619 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:49:07.576551    5619 out.go:177] * [enable-default-cni-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:49:07.585376    5619 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:49:07.585417    5619 notify.go:220] Checking for updates...
	I0828 10:49:07.592306    5619 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:49:07.595330    5619 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:49:07.598298    5619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:49:07.601234    5619 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:49:07.604282    5619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:49:07.607746    5619 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:49:07.607815    5619 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:49:07.607863    5619 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:49:07.612224    5619 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:49:07.619365    5619 start.go:297] selected driver: qemu2
	I0828 10:49:07.619372    5619 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:49:07.619380    5619 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:49:07.621537    5619 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:49:07.626306    5619 out.go:177] * Automatically selected the socket_vmnet network
	E0828 10:49:07.629320    5619 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0828 10:49:07.629331    5619 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:49:07.629364    5619 cni.go:84] Creating CNI manager for "bridge"
	I0828 10:49:07.629368    5619 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 10:49:07.629405    5619 start.go:340] cluster config:
	{Name:enable-default-cni-160000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:49:07.632975    5619 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:49:07.641265    5619 out.go:177] * Starting "enable-default-cni-160000" primary control-plane node in "enable-default-cni-160000" cluster
	I0828 10:49:07.645322    5619 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:49:07.645335    5619 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:49:07.645349    5619 cache.go:56] Caching tarball of preloaded images
	I0828 10:49:07.645401    5619 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:49:07.645406    5619 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:49:07.645475    5619 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/enable-default-cni-160000/config.json ...
	I0828 10:49:07.645486    5619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/enable-default-cni-160000/config.json: {Name:mk501961b4f47332f65a771667276984213e020a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:49:07.645712    5619 start.go:360] acquireMachinesLock for enable-default-cni-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:49:07.645750    5619 start.go:364] duration metric: took 32µs to acquireMachinesLock for "enable-default-cni-160000"
	I0828 10:49:07.645763    5619 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:49:07.645790    5619 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:49:07.653314    5619 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:49:07.670244    5619 start.go:159] libmachine.API.Create for "enable-default-cni-160000" (driver="qemu2")
	I0828 10:49:07.670271    5619 client.go:168] LocalClient.Create starting
	I0828 10:49:07.670332    5619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:49:07.670366    5619 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:07.670374    5619 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:07.670409    5619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:49:07.670432    5619 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:07.670439    5619 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:07.670882    5619 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:49:07.832138    5619 main.go:141] libmachine: Creating SSH key...
	I0828 10:49:07.892992    5619 main.go:141] libmachine: Creating Disk image...
	I0828 10:49:07.892997    5619 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:49:07.893183    5619 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/disk.qcow2
	I0828 10:49:07.902493    5619 main.go:141] libmachine: STDOUT: 
	I0828 10:49:07.902513    5619 main.go:141] libmachine: STDERR: 
	I0828 10:49:07.902557    5619 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/disk.qcow2 +20000M
	I0828 10:49:07.910521    5619 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:49:07.910536    5619 main.go:141] libmachine: STDERR: 
	I0828 10:49:07.910549    5619 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/disk.qcow2
	I0828 10:49:07.910553    5619 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:49:07.910566    5619 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:49:07.910594    5619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:68:08:b3:75:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/disk.qcow2
	I0828 10:49:07.912414    5619 main.go:141] libmachine: STDOUT: 
	I0828 10:49:07.912432    5619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:49:07.912451    5619 client.go:171] duration metric: took 242.183209ms to LocalClient.Create
	I0828 10:49:09.914582    5619 start.go:128] duration metric: took 2.268842167s to createHost
	I0828 10:49:09.914678    5619 start.go:83] releasing machines lock for "enable-default-cni-160000", held for 2.26899425s
	W0828 10:49:09.914778    5619 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:09.924350    5619 out.go:177] * Deleting "enable-default-cni-160000" in qemu2 ...
	W0828 10:49:09.954858    5619 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:09.954894    5619 start.go:729] Will try again in 5 seconds ...
	I0828 10:49:14.955916    5619 start.go:360] acquireMachinesLock for enable-default-cni-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:49:14.956171    5619 start.go:364] duration metric: took 195.583µs to acquireMachinesLock for "enable-default-cni-160000"
	I0828 10:49:14.956210    5619 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:49:14.956328    5619 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:49:14.964641    5619 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:49:14.991380    5619 start.go:159] libmachine.API.Create for "enable-default-cni-160000" (driver="qemu2")
	I0828 10:49:14.991421    5619 client.go:168] LocalClient.Create starting
	I0828 10:49:14.991503    5619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:49:14.991552    5619 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:14.991564    5619 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:14.991610    5619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:49:14.991641    5619 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:14.991649    5619 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:14.992021    5619 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:49:15.155703    5619 main.go:141] libmachine: Creating SSH key...
	I0828 10:49:15.198255    5619 main.go:141] libmachine: Creating Disk image...
	I0828 10:49:15.198261    5619 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:49:15.198432    5619 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/disk.qcow2
	I0828 10:49:15.207783    5619 main.go:141] libmachine: STDOUT: 
	I0828 10:49:15.207801    5619 main.go:141] libmachine: STDERR: 
	I0828 10:49:15.207855    5619 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/disk.qcow2 +20000M
	I0828 10:49:15.215838    5619 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:49:15.215852    5619 main.go:141] libmachine: STDERR: 
	I0828 10:49:15.215864    5619 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/disk.qcow2
	I0828 10:49:15.215874    5619 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:49:15.215883    5619 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:49:15.215910    5619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d5:84:f9:96:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/enable-default-cni-160000/disk.qcow2
	I0828 10:49:15.217528    5619 main.go:141] libmachine: STDOUT: 
	I0828 10:49:15.217545    5619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:49:15.217559    5619 client.go:171] duration metric: took 226.14125ms to LocalClient.Create
	I0828 10:49:17.219639    5619 start.go:128] duration metric: took 2.263369458s to createHost
	I0828 10:49:17.219699    5619 start.go:83] releasing machines lock for "enable-default-cni-160000", held for 2.263588708s
	W0828 10:49:17.219955    5619 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:17.227374    5619 out.go:201] 
	W0828 10:49:17.233613    5619 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:49:17.233661    5619 out.go:270] * 
	* 
	W0828 10:49:17.235023    5619 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:49:17.243544    5619 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.74s)
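
Note: the QEMU command lines in these logs are wrapped by /opt/socket_vmnet/bin/socket_vmnet_client, and the `-netdev socket,id=net0,fd=3` argument tells QEMU to use an already-connected descriptor inherited as fd 3. A hedged Go sketch of that connect-then-exec wrapper pattern (a simplification under assumptions: the real client also performs its own handshake with the daemon, and the QEMU flags below are trimmed to the one relevant option):

	// wrapper.go: assumed sketch of the connect-then-exec pattern.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// This is exactly where every run above fails.
			log.Fatalf(`Failed to connect to "/var/run/socket_vmnet": %v`, err)
		}
		f, err := conn.(*net.UnixConn).File() // duplicate the fd for the child
		if err != nil {
			log.Fatal(err)
		}

		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

Seen this way, the connection-refused error is raised before QEMU ever executes, which matches the empty STDOUT in every attempt above.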

TestNetworkPlugins/group/bridge/Start (9.99s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.990780875s)

-- stdout --
	* [bridge-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-160000" primary control-plane node in "bridge-160000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:49:19.454611    5732 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:49:19.454732    5732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:49:19.454736    5732 out.go:358] Setting ErrFile to fd 2...
	I0828 10:49:19.454738    5732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:49:19.454858    5732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:49:19.455924    5732 out.go:352] Setting JSON to false
	I0828 10:49:19.472253    5732 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4723,"bootTime":1724862636,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:49:19.472324    5732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:49:19.479961    5732 out.go:177] * [bridge-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:49:19.487740    5732 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:49:19.487796    5732 notify.go:220] Checking for updates...
	I0828 10:49:19.495612    5732 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:49:19.502747    5732 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:49:19.506681    5732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:49:19.509810    5732 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:49:19.512764    5732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:49:19.517130    5732 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:49:19.517191    5732 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:49:19.517245    5732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:49:19.520787    5732 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:49:19.527724    5732 start.go:297] selected driver: qemu2
	I0828 10:49:19.527729    5732 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:49:19.527735    5732 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:49:19.530083    5732 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:49:19.534709    5732 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:49:19.537963    5732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:49:19.537985    5732 cni.go:84] Creating CNI manager for "bridge"
	I0828 10:49:19.537989    5732 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 10:49:19.538029    5732 start.go:340] cluster config:
	{Name:bridge-160000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:49:19.541459    5732 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:49:19.549749    5732 out.go:177] * Starting "bridge-160000" primary control-plane node in "bridge-160000" cluster
	I0828 10:49:19.552730    5732 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:49:19.552746    5732 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:49:19.552754    5732 cache.go:56] Caching tarball of preloaded images
	I0828 10:49:19.552813    5732 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:49:19.552818    5732 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:49:19.552893    5732 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/bridge-160000/config.json ...
	I0828 10:49:19.552904    5732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/bridge-160000/config.json: {Name:mk79913a005f0d8d267afecbae9df083a720f369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:49:19.553111    5732 start.go:360] acquireMachinesLock for bridge-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:49:19.553139    5732 start.go:364] duration metric: took 23.666µs to acquireMachinesLock for "bridge-160000"
	I0828 10:49:19.553150    5732 start.go:93] Provisioning new machine with config: &{Name:bridge-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:49:19.553173    5732 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:49:19.560672    5732 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:49:19.575904    5732 start.go:159] libmachine.API.Create for "bridge-160000" (driver="qemu2")
	I0828 10:49:19.575934    5732 client.go:168] LocalClient.Create starting
	I0828 10:49:19.576001    5732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:49:19.576029    5732 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:19.576038    5732 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:19.576076    5732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:49:19.576104    5732 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:19.576116    5732 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:19.576456    5732 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:49:19.737715    5732 main.go:141] libmachine: Creating SSH key...
	I0828 10:49:19.848286    5732 main.go:141] libmachine: Creating Disk image...
	I0828 10:49:19.848295    5732 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:49:19.848729    5732 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/disk.qcow2
	I0828 10:49:19.857975    5732 main.go:141] libmachine: STDOUT: 
	I0828 10:49:19.857991    5732 main.go:141] libmachine: STDERR: 
	I0828 10:49:19.858045    5732 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/disk.qcow2 +20000M
	I0828 10:49:19.865880    5732 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:49:19.865894    5732 main.go:141] libmachine: STDERR: 
	I0828 10:49:19.865908    5732 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/disk.qcow2
	I0828 10:49:19.865913    5732 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:49:19.865925    5732 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:49:19.865951    5732 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:ec:9d:22:cd:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/disk.qcow2
	I0828 10:49:19.867531    5732 main.go:141] libmachine: STDOUT: 
	I0828 10:49:19.867543    5732 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:49:19.867561    5732 client.go:171] duration metric: took 291.630541ms to LocalClient.Create
	I0828 10:49:21.869671    5732 start.go:128] duration metric: took 2.316552416s to createHost
	I0828 10:49:21.869796    5732 start.go:83] releasing machines lock for "bridge-160000", held for 2.316692708s
	W0828 10:49:21.869885    5732 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:21.883844    5732 out.go:177] * Deleting "bridge-160000" in qemu2 ...
	W0828 10:49:21.913845    5732 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:21.913868    5732 start.go:729] Will try again in 5 seconds ...
	I0828 10:49:26.915896    5732 start.go:360] acquireMachinesLock for bridge-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:49:26.916244    5732 start.go:364] duration metric: took 250.958µs to acquireMachinesLock for "bridge-160000"
	I0828 10:49:26.916331    5732 start.go:93] Provisioning new machine with config: &{Name:bridge-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:49:26.916496    5732 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:49:26.928965    5732 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:49:26.965241    5732 start.go:159] libmachine.API.Create for "bridge-160000" (driver="qemu2")
	I0828 10:49:26.965282    5732 client.go:168] LocalClient.Create starting
	I0828 10:49:26.965405    5732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:49:26.965468    5732 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:26.965482    5732 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:26.965545    5732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:49:26.965584    5732 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:26.965603    5732 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:26.966309    5732 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:49:27.131458    5732 main.go:141] libmachine: Creating SSH key...
	I0828 10:49:27.345528    5732 main.go:141] libmachine: Creating Disk image...
	I0828 10:49:27.345539    5732 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:49:27.345741    5732 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/disk.qcow2
	I0828 10:49:27.355606    5732 main.go:141] libmachine: STDOUT: 
	I0828 10:49:27.355625    5732 main.go:141] libmachine: STDERR: 
	I0828 10:49:27.355686    5732 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/disk.qcow2 +20000M
	I0828 10:49:27.363830    5732 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:49:27.363844    5732 main.go:141] libmachine: STDERR: 
	I0828 10:49:27.363870    5732 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/disk.qcow2
	I0828 10:49:27.363876    5732 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:49:27.363891    5732 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:49:27.363928    5732 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bc:f3:42:ab:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/bridge-160000/disk.qcow2
	I0828 10:49:27.365578    5732 main.go:141] libmachine: STDOUT: 
	I0828 10:49:27.365594    5732 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:49:27.365607    5732 client.go:171] duration metric: took 400.334334ms to LocalClient.Create
	I0828 10:49:29.367975    5732 start.go:128] duration metric: took 2.451520958s to createHost
	I0828 10:49:29.368054    5732 start.go:83] releasing machines lock for "bridge-160000", held for 2.451875375s
	W0828 10:49:29.368466    5732 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:29.385260    5732 out.go:201] 
	W0828 10:49:29.389165    5732 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:49:29.389201    5732 out.go:270] * 
	* 
	W0828 10:49:29.392380    5732 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:49:29.404165    5732 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.99s)
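
The log above also shows the provisioning retry shape minikube uses: one failed create, a fixed five-second wait ("Will try again in 5 seconds ..."), one more attempt, then a hard exit with status 80 (GUEST_PROVISION). A hedged Go sketch of that control flow follows; createHost is a stand-in for the real libmachine create path, not minikube's actual code:

// retry.go - sketch of the retry shape visible in these logs, under the
// assumption that both attempts fail the same way. createHost is hypothetical.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func createHost() error {
	// In these runs the create path always fails with:
	// Failed to connect to "/var/run/socket_vmnet": Connection refused
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Fprintf(os.Stderr, "! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Fprintf(os.Stderr, "X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // the exit status the test harness reports below
		}
	}
}

Because the refused socket is a host-wide condition, the single retry cannot succeed, which is why each of these tests fails in roughly ten seconds.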

TestNetworkPlugins/group/kubenet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-160000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.832000959s)

-- stdout --
	* [kubenet-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-160000" primary control-plane node in "kubenet-160000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:49:31.598678    5845 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:49:31.598876    5845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:49:31.598879    5845 out.go:358] Setting ErrFile to fd 2...
	I0828 10:49:31.598881    5845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:49:31.599018    5845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:49:31.600100    5845 out.go:352] Setting JSON to false
	I0828 10:49:31.616257    5845 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4735,"bootTime":1724862636,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:49:31.616380    5845 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:49:31.623009    5845 out.go:177] * [kubenet-160000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:49:31.631776    5845 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:49:31.631814    5845 notify.go:220] Checking for updates...
	I0828 10:49:31.637806    5845 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:49:31.645752    5845 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:49:31.653753    5845 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:49:31.657751    5845 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:49:31.661640    5845 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:49:31.665140    5845 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:49:31.665206    5845 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:49:31.665260    5845 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:49:31.669776    5845 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:49:31.675709    5845 start.go:297] selected driver: qemu2
	I0828 10:49:31.675716    5845 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:49:31.675723    5845 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:49:31.677909    5845 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:49:31.680736    5845 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:49:31.688645    5845 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:49:31.688663    5845 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0828 10:49:31.688695    5845 start.go:340] cluster config:
	{Name:kubenet-160000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:49:31.692169    5845 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:49:31.700745    5845 out.go:177] * Starting "kubenet-160000" primary control-plane node in "kubenet-160000" cluster
	I0828 10:49:31.704713    5845 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:49:31.704726    5845 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:49:31.704734    5845 cache.go:56] Caching tarball of preloaded images
	I0828 10:49:31.704784    5845 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:49:31.704789    5845 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:49:31.704853    5845 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/kubenet-160000/config.json ...
	I0828 10:49:31.704863    5845 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/kubenet-160000/config.json: {Name:mk0e1824f0dd7b2a4aa96fe3d3bc268b601f489a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:49:31.705309    5845 start.go:360] acquireMachinesLock for kubenet-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:49:31.705346    5845 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "kubenet-160000"
	I0828 10:49:31.705358    5845 start.go:93] Provisioning new machine with config: &{Name:kubenet-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:49:31.705380    5845 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:49:31.713764    5845 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:49:31.728669    5845 start.go:159] libmachine.API.Create for "kubenet-160000" (driver="qemu2")
	I0828 10:49:31.728698    5845 client.go:168] LocalClient.Create starting
	I0828 10:49:31.728767    5845 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:49:31.728803    5845 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:31.728811    5845 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:31.728847    5845 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:49:31.728870    5845 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:31.728880    5845 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:31.729282    5845 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:49:31.891720    5845 main.go:141] libmachine: Creating SSH key...
	I0828 10:49:31.981049    5845 main.go:141] libmachine: Creating Disk image...
	I0828 10:49:31.981059    5845 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:49:31.981248    5845 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/disk.qcow2
	I0828 10:49:31.990648    5845 main.go:141] libmachine: STDOUT: 
	I0828 10:49:31.990669    5845 main.go:141] libmachine: STDERR: 
	I0828 10:49:31.990725    5845 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/disk.qcow2 +20000M
	I0828 10:49:31.998784    5845 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:49:31.998820    5845 main.go:141] libmachine: STDERR: 
	I0828 10:49:31.998835    5845 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/disk.qcow2
	I0828 10:49:31.998840    5845 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:49:31.998858    5845 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:49:31.998883    5845 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:74:32:d2:64:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/disk.qcow2
	I0828 10:49:32.000473    5845 main.go:141] libmachine: STDOUT: 
	I0828 10:49:32.000489    5845 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:49:32.000510    5845 client.go:171] duration metric: took 271.814917ms to LocalClient.Create
	I0828 10:49:34.002589    5845 start.go:128] duration metric: took 2.297276s to createHost
	I0828 10:49:34.002621    5845 start.go:83] releasing machines lock for "kubenet-160000", held for 2.297348625s
	W0828 10:49:34.002666    5845 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:34.017478    5845 out.go:177] * Deleting "kubenet-160000" in qemu2 ...
	W0828 10:49:34.033660    5845 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:34.033668    5845 start.go:729] Will try again in 5 seconds ...
	I0828 10:49:39.035689    5845 start.go:360] acquireMachinesLock for kubenet-160000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:49:39.036249    5845 start.go:364] duration metric: took 458.959µs to acquireMachinesLock for "kubenet-160000"
	I0828 10:49:39.036425    5845 start.go:93] Provisioning new machine with config: &{Name:kubenet-160000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-160000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:49:39.036808    5845 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:49:39.042474    5845 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0828 10:49:39.085706    5845 start.go:159] libmachine.API.Create for "kubenet-160000" (driver="qemu2")
	I0828 10:49:39.085755    5845 client.go:168] LocalClient.Create starting
	I0828 10:49:39.085881    5845 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:49:39.085944    5845 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:39.085956    5845 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:39.086018    5845 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:49:39.086058    5845 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:39.086067    5845 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:39.086585    5845 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:49:39.253872    5845 main.go:141] libmachine: Creating SSH key...
	I0828 10:49:39.344775    5845 main.go:141] libmachine: Creating Disk image...
	I0828 10:49:39.344781    5845 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:49:39.344963    5845 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/disk.qcow2
	I0828 10:49:39.354406    5845 main.go:141] libmachine: STDOUT: 
	I0828 10:49:39.354425    5845 main.go:141] libmachine: STDERR: 
	I0828 10:49:39.354467    5845 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/disk.qcow2 +20000M
	I0828 10:49:39.362594    5845 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:49:39.362614    5845 main.go:141] libmachine: STDERR: 
	I0828 10:49:39.362642    5845 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/disk.qcow2
	I0828 10:49:39.362647    5845 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:49:39.362658    5845 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:49:39.362690    5845 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:15:e1:41:99:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/kubenet-160000/disk.qcow2
	I0828 10:49:39.364428    5845 main.go:141] libmachine: STDOUT: 
	I0828 10:49:39.364445    5845 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:49:39.364456    5845 client.go:171] duration metric: took 278.703792ms to LocalClient.Create
	I0828 10:49:41.366473    5845 start.go:128] duration metric: took 2.32972275s to createHost
	I0828 10:49:41.366505    5845 start.go:83] releasing machines lock for "kubenet-160000", held for 2.330290042s
	W0828 10:49:41.366652    5845 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:41.375967    5845 out.go:201] 
	W0828 10:49:41.380883    5845 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:49:41.380892    5845 out.go:270] * 
	* 
	W0828 10:49:41.381451    5845 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:49:41.393926    5845 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.83s)
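
For reference, net_test.go:112 drives each of these cases by shelling out to the minikube binary and asserting on the exit status. A simplified Go sketch of that invocation, modeled on the command lines above (this is not the actual test code, and the profile name is just the one from this run):

// runstart.go - sketch of how the network-plugin tests invoke minikube,
// assuming the kubenet flags recorded in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"start", "-p", "kubenet-160000",
		"--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m",
		"--network-plugin=kubenet", "--driver=qemu2")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// A non-zero exit surfaces as an *exec.ExitError,
		// e.g. "exit status 80" in these runs.
		fmt.Fprintf(os.Stderr, "failed start: %v\n", err)
		os.Exit(1)
	}
}

Under the socket_vmnet failure, cmd.Run() returns an error carrying exit code 80, which is what the harness reports as "failed start: exit status 80".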

TestStartStop/group/old-k8s-version/serial/FirstStart (9.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-198000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-198000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.728332s)

-- stdout --
	* [old-k8s-version-198000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-198000" primary control-plane node in "old-k8s-version-198000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-198000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:49:43.562008    5959 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:49:43.562129    5959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:49:43.562133    5959 out.go:358] Setting ErrFile to fd 2...
	I0828 10:49:43.562139    5959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:49:43.562256    5959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:49:43.563385    5959 out.go:352] Setting JSON to false
	I0828 10:49:43.580484    5959 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4747,"bootTime":1724862636,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:49:43.580582    5959 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:49:43.586201    5959 out.go:177] * [old-k8s-version-198000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:49:43.594503    5959 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:49:43.594579    5959 notify.go:220] Checking for updates...
	I0828 10:49:43.601382    5959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:49:43.604395    5959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:49:43.607489    5959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:49:43.610399    5959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:49:43.613427    5959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:49:43.615241    5959 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:49:43.615304    5959 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:49:43.615350    5959 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:49:43.619298    5959 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:49:43.626211    5959 start.go:297] selected driver: qemu2
	I0828 10:49:43.626219    5959 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:49:43.626238    5959 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:49:43.628523    5959 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:49:43.631377    5959 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:49:43.634429    5959 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:49:43.634445    5959 cni.go:84] Creating CNI manager for ""
	I0828 10:49:43.634451    5959 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0828 10:49:43.634476    5959 start.go:340] cluster config:
	{Name:old-k8s-version-198000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:49:43.637911    5959 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:49:43.645408    5959 out.go:177] * Starting "old-k8s-version-198000" primary control-plane node in "old-k8s-version-198000" cluster
	I0828 10:49:43.649372    5959 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 10:49:43.649387    5959 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0828 10:49:43.649396    5959 cache.go:56] Caching tarball of preloaded images
	I0828 10:49:43.649451    5959 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:49:43.649459    5959 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0828 10:49:43.649525    5959 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/old-k8s-version-198000/config.json ...
	I0828 10:49:43.649537    5959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/old-k8s-version-198000/config.json: {Name:mkc59afd459aef1d9e3fb5cce30b62ab58a70356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:49:43.649760    5959 start.go:360] acquireMachinesLock for old-k8s-version-198000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:49:43.649792    5959 start.go:364] duration metric: took 23.958µs to acquireMachinesLock for "old-k8s-version-198000"
	I0828 10:49:43.649803    5959 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-198000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:49:43.649826    5959 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:49:43.658454    5959 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:49:43.673633    5959 start.go:159] libmachine.API.Create for "old-k8s-version-198000" (driver="qemu2")
	I0828 10:49:43.673669    5959 client.go:168] LocalClient.Create starting
	I0828 10:49:43.673752    5959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:49:43.673786    5959 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:43.673797    5959 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:43.673833    5959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:49:43.673855    5959 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:43.673862    5959 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:43.674196    5959 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:49:43.836853    5959 main.go:141] libmachine: Creating SSH key...
	I0828 10:49:43.872560    5959 main.go:141] libmachine: Creating Disk image...
	I0828 10:49:43.872569    5959 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:49:43.872748    5959 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2
	I0828 10:49:43.881964    5959 main.go:141] libmachine: STDOUT: 
	I0828 10:49:43.881982    5959 main.go:141] libmachine: STDERR: 
	I0828 10:49:43.882032    5959 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2 +20000M
	I0828 10:49:43.889831    5959 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:49:43.889852    5959 main.go:141] libmachine: STDERR: 
	I0828 10:49:43.889863    5959 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2
	I0828 10:49:43.889866    5959 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:49:43.889880    5959 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:49:43.889912    5959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:db:38:d3:a6:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2
	I0828 10:49:43.891519    5959 main.go:141] libmachine: STDOUT: 
	I0828 10:49:43.891542    5959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:49:43.891559    5959 client.go:171] duration metric: took 217.892542ms to LocalClient.Create
	I0828 10:49:45.893678    5959 start.go:128] duration metric: took 2.24390325s to createHost
	I0828 10:49:45.893794    5959 start.go:83] releasing machines lock for "old-k8s-version-198000", held for 2.244069667s
	W0828 10:49:45.893874    5959 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:45.903736    5959 out.go:177] * Deleting "old-k8s-version-198000" in qemu2 ...
	W0828 10:49:45.933445    5959 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:45.933470    5959 start.go:729] Will try again in 5 seconds ...
	I0828 10:49:50.935521    5959 start.go:360] acquireMachinesLock for old-k8s-version-198000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:49:50.935849    5959 start.go:364] duration metric: took 248.417µs to acquireMachinesLock for "old-k8s-version-198000"
	I0828 10:49:50.935936    5959 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-198000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:49:50.936039    5959 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:49:50.945436    5959 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:49:50.980288    5959 start.go:159] libmachine.API.Create for "old-k8s-version-198000" (driver="qemu2")
	I0828 10:49:50.980341    5959 client.go:168] LocalClient.Create starting
	I0828 10:49:50.980426    5959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:49:50.980486    5959 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:50.980503    5959 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:50.980557    5959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:49:50.980591    5959 main.go:141] libmachine: Decoding PEM data...
	I0828 10:49:50.980604    5959 main.go:141] libmachine: Parsing certificate...
	I0828 10:49:50.981105    5959 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:49:51.146420    5959 main.go:141] libmachine: Creating SSH key...
	I0828 10:49:51.205510    5959 main.go:141] libmachine: Creating Disk image...
	I0828 10:49:51.205517    5959 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:49:51.205708    5959 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2
	I0828 10:49:51.215015    5959 main.go:141] libmachine: STDOUT: 
	I0828 10:49:51.215034    5959 main.go:141] libmachine: STDERR: 
	I0828 10:49:51.215079    5959 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2 +20000M
	I0828 10:49:51.223013    5959 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:49:51.223030    5959 main.go:141] libmachine: STDERR: 
	I0828 10:49:51.223049    5959 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2
	I0828 10:49:51.223054    5959 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:49:51.223065    5959 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:49:51.223093    5959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:36:70:bf:cd:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2
	I0828 10:49:51.224821    5959 main.go:141] libmachine: STDOUT: 
	I0828 10:49:51.224839    5959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:49:51.224853    5959 client.go:171] duration metric: took 244.515209ms to LocalClient.Create
	I0828 10:49:53.226952    5959 start.go:128] duration metric: took 2.290955209s to createHost
	I0828 10:49:53.227032    5959 start.go:83] releasing machines lock for "old-k8s-version-198000", held for 2.291241417s
	W0828 10:49:53.227313    5959 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-198000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-198000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:53.234943    5959 out.go:201] 
	W0828 10:49:53.239096    5959 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:49:53.239137    5959 out.go:270] * 
	* 
	W0828 10:49:53.240598    5959 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:49:53.255842    5959 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-198000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000: exit status 7 (43.839209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-198000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.77s)
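
All of the old-k8s-version failures below share one root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the QEMU VM is never launched and every subsequent step finds the host "Stopped". A minimal check of the daemon on the build host, assuming socket_vmnet was installed via Homebrew at the paths shown in the log (the commands are illustrative and were not part of the test run):

	# Confirm the unix socket exists and that a daemon is serving it.
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If nothing is listening, restart the root-owned service.
	sudo brew services restart socket_vmnet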

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-198000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-198000 create -f testdata/busybox.yaml: exit status 1 (27.90775ms)

** stderr ** 
	error: context "old-k8s-version-198000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-198000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000: exit status 7 (30.417292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-198000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000: exit status 7 (29.254ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-198000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
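
DeployApp fails before ever touching a cluster: kubectl rejects the command because the "old-k8s-version-198000" context was never written to the kubeconfig (FirstStart exited before provisioning). That precondition can be confirmed independently of the harness with something like the following (illustrative only):

	# An empty result here means the create above had nothing to talk to.
	kubectl config get-contexts -o name | grep old-k8s-version-198000 \
	  || echo "context missing: FirstStart never created the cluster"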

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-198000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-198000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-198000 describe deploy/metrics-server -n kube-system: exit status 1 (27.494334ms)

** stderr ** 
	error: context "old-k8s-version-198000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-198000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000: exit status 7 (29.479709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-198000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
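
The assertion above encodes how the addon flags compose: the --registries=MetricsServer=fake.domain override is prefixed onto the --images=MetricsServer=registry.k8s.io/echoserver:1.4 override, so the deployment is expected to reference fake.domain/registry.k8s.io/echoserver:1.4. On a running cluster the check would reduce to something like this (illustrative):

	kubectl --context old-k8s-version-198000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}' \
	  | grep -q 'fake.domain/registry.k8s.io/echoserver:1.4' && echo "override applied"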

TestStartStop/group/old-k8s-version/serial/SecondStart (5.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-198000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-198000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.190636667s)

-- stdout --
	* [old-k8s-version-198000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-198000" primary control-plane node in "old-k8s-version-198000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-198000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-198000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:49:55.792317    6008 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:49:55.792460    6008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:49:55.792464    6008 out.go:358] Setting ErrFile to fd 2...
	I0828 10:49:55.792466    6008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:49:55.792605    6008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:49:55.793622    6008 out.go:352] Setting JSON to false
	I0828 10:49:55.810330    6008 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4759,"bootTime":1724862636,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:49:55.810404    6008 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:49:55.814611    6008 out.go:177] * [old-k8s-version-198000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:49:55.822853    6008 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:49:55.822917    6008 notify.go:220] Checking for updates...
	I0828 10:49:55.829748    6008 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:49:55.832794    6008 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:49:55.835814    6008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:49:55.838729    6008 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:49:55.841822    6008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:49:55.845055    6008 config.go:182] Loaded profile config "old-k8s-version-198000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0828 10:49:55.848721    6008 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 10:49:55.851780    6008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:49:55.856662    6008 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:49:55.863787    6008 start.go:297] selected driver: qemu2
	I0828 10:49:55.863798    6008 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-198000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:49:55.863867    6008 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:49:55.866464    6008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:49:55.866506    6008 cni.go:84] Creating CNI manager for ""
	I0828 10:49:55.866519    6008 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0828 10:49:55.866540    6008 start.go:340] cluster config:
	{Name:old-k8s-version-198000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-198000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:49:55.870277    6008 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:49:55.878743    6008 out.go:177] * Starting "old-k8s-version-198000" primary control-plane node in "old-k8s-version-198000" cluster
	I0828 10:49:55.882697    6008 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 10:49:55.882714    6008 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0828 10:49:55.882725    6008 cache.go:56] Caching tarball of preloaded images
	I0828 10:49:55.882794    6008 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:49:55.882800    6008 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0828 10:49:55.882867    6008 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/old-k8s-version-198000/config.json ...
	I0828 10:49:55.883402    6008 start.go:360] acquireMachinesLock for old-k8s-version-198000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:49:55.883432    6008 start.go:364] duration metric: took 24.792µs to acquireMachinesLock for "old-k8s-version-198000"
	I0828 10:49:55.883442    6008 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:49:55.883449    6008 fix.go:54] fixHost starting: 
	I0828 10:49:55.883561    6008 fix.go:112] recreateIfNeeded on old-k8s-version-198000: state=Stopped err=<nil>
	W0828 10:49:55.883569    6008 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:49:55.887753    6008 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-198000" ...
	I0828 10:49:55.895770    6008 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:49:55.895805    6008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:36:70:bf:cd:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2
	I0828 10:49:55.897762    6008 main.go:141] libmachine: STDOUT: 
	I0828 10:49:55.897783    6008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:49:55.897810    6008 fix.go:56] duration metric: took 14.363209ms for fixHost
	I0828 10:49:55.897816    6008 start.go:83] releasing machines lock for "old-k8s-version-198000", held for 14.379042ms
	W0828 10:49:55.897823    6008 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:49:55.897852    6008 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:49:55.897856    6008 start.go:729] Will try again in 5 seconds ...
	I0828 10:50:00.898519    6008 start.go:360] acquireMachinesLock for old-k8s-version-198000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:00.898603    6008 start.go:364] duration metric: took 65.333µs to acquireMachinesLock for "old-k8s-version-198000"
	I0828 10:50:00.898620    6008 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:50:00.898624    6008 fix.go:54] fixHost starting: 
	I0828 10:50:00.898783    6008 fix.go:112] recreateIfNeeded on old-k8s-version-198000: state=Stopped err=<nil>
	W0828 10:50:00.898788    6008 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:50:00.907078    6008 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-198000" ...
	I0828 10:50:00.915288    6008 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:00.915330    6008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:36:70:bf:cd:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/old-k8s-version-198000/disk.qcow2
	I0828 10:50:00.917498    6008 main.go:141] libmachine: STDOUT: 
	I0828 10:50:00.917512    6008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:00.917530    6008 fix.go:56] duration metric: took 18.906292ms for fixHost
	I0828 10:50:00.917535    6008 start.go:83] releasing machines lock for "old-k8s-version-198000", held for 18.92575ms
	W0828 10:50:00.917580    6008 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-198000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-198000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:00.925221    6008 out.go:201] 
	W0828 10:50:00.932272    6008 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:00.932279    6008 out.go:270] * 
	* 
	W0828 10:50:00.932729    6008 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:50:00.942305    6008 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-198000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000: exit status 7 (30.637542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-198000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.22s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-198000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000: exit status 7 (31.48825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-198000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-198000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-198000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-198000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.714083ms)

** stderr ** 
	error: context "old-k8s-version-198000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-198000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000: exit status 7 (30.486625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-198000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-198000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000: exit status 7 (29.939458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-198000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.09s)
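
Because the VM never existed, image list returns an empty set, so the want/got diff above reports every expected v1.20.0 image as missing. Assuming the JSON output is an array of objects carrying a repoTags field (an assumption about the format, not shown in this log), the reported images can be extracted for manual comparison like so (illustrative):

	out/minikube-darwin-arm64 -p old-k8s-version-198000 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort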

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-198000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-198000 --alsologtostderr -v=1: exit status 83 (48.664792ms)

-- stdout --
	* The control-plane node old-k8s-version-198000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-198000"

-- /stdout --
** stderr ** 
	I0828 10:50:01.190544    6029 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:01.194405    6029 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:01.194410    6029 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:01.194412    6029 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:01.194552    6029 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:01.197790    6029 out.go:352] Setting JSON to false
	I0828 10:50:01.197800    6029 mustload.go:65] Loading cluster: old-k8s-version-198000
	I0828 10:50:01.198033    6029 config.go:182] Loaded profile config "old-k8s-version-198000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0828 10:50:01.203312    6029 out.go:177] * The control-plane node old-k8s-version-198000 host is not running: state=Stopped
	I0828 10:50:01.207197    6029 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-198000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-198000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000: exit status 7 (30.090583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-198000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000: exit status 7 (30.513583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-198000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-178000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-178000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.7901185s)

-- stdout --
	* [no-preload-178000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-178000" primary control-plane node in "no-preload-178000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-178000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:50:01.555046    6048 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:01.555192    6048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:01.555195    6048 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:01.555198    6048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:01.555339    6048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:01.556419    6048 out.go:352] Setting JSON to false
	I0828 10:50:01.573116    6048 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4765,"bootTime":1724862636,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:50:01.573188    6048 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:50:01.577316    6048 out.go:177] * [no-preload-178000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:50:01.584155    6048 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:50:01.584227    6048 notify.go:220] Checking for updates...
	I0828 10:50:01.591294    6048 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:50:01.594270    6048 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:50:01.597238    6048 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:50:01.600280    6048 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:50:01.601822    6048 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:50:01.605578    6048 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:01.605654    6048 config.go:182] Loaded profile config "stopped-upgrade-801000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0828 10:50:01.605695    6048 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:50:01.609298    6048 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:50:01.615232    6048 start.go:297] selected driver: qemu2
	I0828 10:50:01.615238    6048 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:50:01.615244    6048 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:50:01.617524    6048 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:50:01.621256    6048 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:50:01.622581    6048 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:50:01.622617    6048 cni.go:84] Creating CNI manager for ""
	I0828 10:50:01.622628    6048 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:50:01.622633    6048 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 10:50:01.622660    6048 start.go:340] cluster config:
	{Name:no-preload-178000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:50:01.626342    6048 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:01.635304    6048 out.go:177] * Starting "no-preload-178000" primary control-plane node in "no-preload-178000" cluster
	I0828 10:50:01.639256    6048 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:50:01.639325    6048 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/no-preload-178000/config.json ...
	I0828 10:50:01.639339    6048 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/no-preload-178000/config.json: {Name:mkd6aa60de723405c8350cb331378249facc3594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:50:01.639353    6048 cache.go:107] acquiring lock: {Name:mk66997ddcc8265d49bd337f07be40d6e3f18ebe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:01.639353    6048 cache.go:107] acquiring lock: {Name:mkf538eb0d7aa9fae1b842e5b9bb6f64b5f3d04f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:01.639363    6048 cache.go:107] acquiring lock: {Name:mk2355a5afa8d668cf9c2c1b6435e64e12749a38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:01.639379    6048 cache.go:107] acquiring lock: {Name:mkb4eb9196d597749a10edbc265951542a0ec79e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:01.639389    6048 cache.go:107] acquiring lock: {Name:mk154cf4ba61ef3b574ceae62486e173cbb6ab2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:01.639420    6048 cache.go:115] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0828 10:50:01.639428    6048 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 77.583µs
	I0828 10:50:01.639442    6048 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0828 10:50:01.639449    6048 cache.go:107] acquiring lock: {Name:mk6c29bb2b5a9e8f1463ba928bb4b568f095af40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:01.639536    6048 cache.go:107] acquiring lock: {Name:mk164215a1dba98d463a2409d338fc5024929718 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:01.639546    6048 cache.go:107] acquiring lock: {Name:mk20ccce6c1cbdee66c8de90bc7358df0f79729a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:01.639569    6048 start.go:360] acquireMachinesLock for no-preload-178000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:01.639599    6048 start.go:364] duration metric: took 24.333µs to acquireMachinesLock for "no-preload-178000"
	I0828 10:50:01.639611    6048 start.go:93] Provisioning new machine with config: &{Name:no-preload-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:50:01.639662    6048 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:50:01.639677    6048 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 10:50:01.639813    6048 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0828 10:50:01.639820    6048 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0828 10:50:01.639735    6048 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 10:50:01.640184    6048 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 10:50:01.643792    6048 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 10:50:01.643818    6048 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 10:50:01.647234    6048 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:50:01.649824    6048 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 10:50:01.649892    6048 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 10:50:01.650429    6048 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0828 10:50:01.650535    6048 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 10:50:01.650545    6048 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0828 10:50:01.650602    6048 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 10:50:01.650615    6048 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 10:50:01.663108    6048 start.go:159] libmachine.API.Create for "no-preload-178000" (driver="qemu2")
	I0828 10:50:01.663128    6048 client.go:168] LocalClient.Create starting
	I0828 10:50:01.663191    6048 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:50:01.663220    6048 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:01.663231    6048 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:01.663273    6048 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:50:01.663296    6048 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:01.663309    6048 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:01.663682    6048 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:50:01.827315    6048 main.go:141] libmachine: Creating SSH key...
	I0828 10:50:01.897145    6048 main.go:141] libmachine: Creating Disk image...
	I0828 10:50:01.897161    6048 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:50:01.897358    6048 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2
	I0828 10:50:01.907083    6048 main.go:141] libmachine: STDOUT: 
	I0828 10:50:01.907112    6048 main.go:141] libmachine: STDERR: 
	I0828 10:50:01.907168    6048 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2 +20000M
	I0828 10:50:01.916408    6048 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:50:01.916438    6048 main.go:141] libmachine: STDERR: 
	I0828 10:50:01.916461    6048 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2
	I0828 10:50:01.916467    6048 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:50:01.916481    6048 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:01.916507    6048 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:94:85:01:0c:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2
	I0828 10:50:01.918502    6048 main.go:141] libmachine: STDOUT: 
	I0828 10:50:01.918521    6048 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:01.918541    6048 client.go:171] duration metric: took 255.417708ms to LocalClient.Create
	I0828 10:50:02.600447    6048 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0828 10:50:02.636107    6048 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0828 10:50:02.643107    6048 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0828 10:50:02.679876    6048 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0828 10:50:02.768915    6048 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0828 10:50:02.768977    6048 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 1.129496s
	I0828 10:50:02.769005    6048 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0828 10:50:02.829156    6048 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0828 10:50:02.833199    6048 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0828 10:50:02.841511    6048 cache.go:162] opening:  /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0828 10:50:03.918713    6048 start.go:128] duration metric: took 2.279105625s to createHost
	I0828 10:50:03.918775    6048 start.go:83] releasing machines lock for "no-preload-178000", held for 2.279246625s
	W0828 10:50:03.918831    6048 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:03.932745    6048 out.go:177] * Deleting "no-preload-178000" in qemu2 ...
	W0828 10:50:03.955928    6048 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:03.955953    6048 start.go:729] Will try again in 5 seconds ...
	I0828 10:50:05.996260    6048 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0828 10:50:05.996309    6048 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 4.357003458s
	I0828 10:50:05.996362    6048 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0828 10:50:06.029394    6048 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0828 10:50:06.029463    6048 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.390226916s
	I0828 10:50:06.029485    6048 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0828 10:50:06.480313    6048 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0828 10:50:06.480342    6048 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 4.841153625s
	I0828 10:50:06.480360    6048 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0828 10:50:06.878805    6048 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0828 10:50:06.878839    6048 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 5.239623792s
	I0828 10:50:06.878859    6048 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0828 10:50:06.912635    6048 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0828 10:50:06.912683    6048 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 5.273334959s
	I0828 10:50:06.912709    6048 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0828 10:50:08.955950    6048 start.go:360] acquireMachinesLock for no-preload-178000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:08.956370    6048 start.go:364] duration metric: took 361.917µs to acquireMachinesLock for "no-preload-178000"
	I0828 10:50:08.956469    6048 start.go:93] Provisioning new machine with config: &{Name:no-preload-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:50:08.956620    6048 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:50:08.966068    6048 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:50:09.009829    6048 start.go:159] libmachine.API.Create for "no-preload-178000" (driver="qemu2")
	I0828 10:50:09.009893    6048 client.go:168] LocalClient.Create starting
	I0828 10:50:09.010040    6048 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:50:09.010106    6048 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:09.010124    6048 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:09.010187    6048 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:50:09.010247    6048 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:09.010262    6048 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:09.010726    6048 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:50:09.180162    6048 main.go:141] libmachine: Creating SSH key...
	I0828 10:50:09.258654    6048 main.go:141] libmachine: Creating Disk image...
	I0828 10:50:09.258663    6048 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:50:09.258861    6048 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2
	I0828 10:50:09.268258    6048 main.go:141] libmachine: STDOUT: 
	I0828 10:50:09.268279    6048 main.go:141] libmachine: STDERR: 
	I0828 10:50:09.268337    6048 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2 +20000M
	I0828 10:50:09.276452    6048 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:50:09.276471    6048 main.go:141] libmachine: STDERR: 
	I0828 10:50:09.276487    6048 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2
	I0828 10:50:09.276491    6048 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:50:09.276499    6048 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:09.276538    6048 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:34:61:da:6a:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2
	I0828 10:50:09.278295    6048 main.go:141] libmachine: STDOUT: 
	I0828 10:50:09.278311    6048 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:09.278324    6048 client.go:171] duration metric: took 268.435292ms to LocalClient.Create
	I0828 10:50:10.573055    6048 cache.go:157] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0828 10:50:10.573115    6048 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.93405375s
	I0828 10:50:10.573137    6048 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0828 10:50:10.573186    6048 cache.go:87] Successfully saved all images to host disk.
	I0828 10:50:11.280410    6048 start.go:128] duration metric: took 2.323850875s to createHost
	I0828 10:50:11.280437    6048 start.go:83] releasing machines lock for "no-preload-178000", held for 2.3241305s
	W0828 10:50:11.280585    6048 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-178000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-178000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:11.290038    6048 out.go:201] 
	W0828 10:50:11.294118    6048 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:11.294125    6048 out.go:270] * 
	* 
	W0828 10:50:11.294749    6048 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:50:11.308980    6048 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-178000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000: exit status 7 (35.632166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.83s)
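Both create attempts above die at the same step: before launching qemu-system-aarch64, libmachine invokes socket_vmnet_client against the unix socket /var/run/socket_vmnet, and the connect is refused, meaning the socket_vmnet daemon is not listening on this host. A minimal sketch that reproduces just the failing dial (socket path taken from the log; this is a diagnostic aid, not minikube's code):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Same unix socket the libmachine VM-start step tries to reach.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            // On this host this prints "connection refused", i.e. the
            // socket_vmnet daemon is not running or not listening here.
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }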

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-178000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-178000 create -f testdata/busybox.yaml: exit status 1 (28.640833ms)

** stderr ** 
	error: context "no-preload-178000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-178000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000: exit status 7 (28.873625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-178000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000: exit status 7 (33.668ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
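DeployApp fails as a cascade of the first-start failure: the VM was never created, so the "no-preload-178000" context was never written to the kubeconfig and every kubectl invocation bails out immediately. An illustrative guard (not the test's actual code) that checks for the context before running kubectl-dependent steps, using client-go's clientcmd loader:

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        // Mirrors the kubectl error above when the context is missing.
        if _, ok := cfg.Contexts["no-preload-178000"]; !ok {
            fmt.Println(`context "no-preload-178000" does not exist`)
        }
    }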

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-178000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-178000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-178000 describe deploy/metrics-server -n kube-system: exit status 1 (27.360625ms)

** stderr ** 
	error: context "no-preload-178000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-178000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000: exit status 7 (29.802708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
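The assertion above expects the metrics-server deployment to reference "fake.domain/registry.k8s.io/echoserver:1.4", i.e. the --registries override prefixed onto the --images override from the addons enable invocation. A sketch of that composition (the two values come from the test command line; the plain string join is an assumption about how the expected string is built):

    package main

    import "fmt"

    func main() {
        registry := "fake.domain"                 // --registries=MetricsServer=fake.domain
        image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=registry.k8s.io/echoserver:1.4
        fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
    }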

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-178000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-178000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.19639325s)

-- stdout --
	* [no-preload-178000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-178000" primary control-plane node in "no-preload-178000" cluster
	* Restarting existing qemu2 VM for "no-preload-178000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-178000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:50:14.786499    6125 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:14.786657    6125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:14.786660    6125 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:14.786662    6125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:14.786784    6125 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:14.787811    6125 out.go:352] Setting JSON to false
	I0828 10:50:14.803980    6125 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4778,"bootTime":1724862636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:50:14.804067    6125 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:50:14.809033    6125 out.go:177] * [no-preload-178000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:50:14.814015    6125 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:50:14.814104    6125 notify.go:220] Checking for updates...
	I0828 10:50:14.821901    6125 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:50:14.825007    6125 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:50:14.828029    6125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:50:14.831044    6125 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:50:14.833988    6125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:50:14.837240    6125 config.go:182] Loaded profile config "no-preload-178000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:14.837475    6125 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:50:14.841962    6125 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:50:14.849048    6125 start.go:297] selected driver: qemu2
	I0828 10:50:14.849054    6125 start.go:901] validating driver "qemu2" against &{Name:no-preload-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:50:14.849126    6125 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:50:14.851309    6125 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:50:14.851355    6125 cni.go:84] Creating CNI manager for ""
	I0828 10:50:14.851363    6125 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:50:14.851381    6125 start.go:340] cluster config:
	{Name:no-preload-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:50:14.854899    6125 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:14.863013    6125 out.go:177] * Starting "no-preload-178000" primary control-plane node in "no-preload-178000" cluster
	I0828 10:50:14.866843    6125 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:50:14.866946    6125 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/no-preload-178000/config.json ...
	I0828 10:50:14.866939    6125 cache.go:107] acquiring lock: {Name:mkf538eb0d7aa9fae1b842e5b9bb6f64b5f3d04f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:14.866954    6125 cache.go:107] acquiring lock: {Name:mk20ccce6c1cbdee66c8de90bc7358df0f79729a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:14.866979    6125 cache.go:107] acquiring lock: {Name:mk6c29bb2b5a9e8f1463ba928bb4b568f095af40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:14.867002    6125 cache.go:115] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0828 10:50:14.867006    6125 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 74.167µs
	I0828 10:50:14.867012    6125 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0828 10:50:14.867017    6125 cache.go:115] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0828 10:50:14.867019    6125 cache.go:107] acquiring lock: {Name:mk66997ddcc8265d49bd337f07be40d6e3f18ebe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:14.867024    6125 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 78.209µs
	I0828 10:50:14.867028    6125 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0828 10:50:14.867031    6125 cache.go:115] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0828 10:50:14.867038    6125 cache.go:107] acquiring lock: {Name:mk2355a5afa8d668cf9c2c1b6435e64e12749a38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:14.867039    6125 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 96.958µs
	I0828 10:50:14.867053    6125 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0828 10:50:14.867062    6125 cache.go:115] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0828 10:50:14.867062    6125 cache.go:107] acquiring lock: {Name:mk164215a1dba98d463a2409d338fc5024929718 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:14.867075    6125 cache.go:115] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0828 10:50:14.867080    6125 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 43.208µs
	I0828 10:50:14.867066    6125 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 47.209µs
	I0828 10:50:14.867097    6125 cache.go:115] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0828 10:50:14.867103    6125 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 42µs
	I0828 10:50:14.867084    6125 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0828 10:50:14.867071    6125 cache.go:107] acquiring lock: {Name:mkb4eb9196d597749a10edbc265951542a0ec79e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:14.867121    6125 cache.go:107] acquiring lock: {Name:mk154cf4ba61ef3b574ceae62486e173cbb6ab2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:14.867108    6125 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0828 10:50:14.867159    6125 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0828 10:50:14.867172    6125 cache.go:115] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0828 10:50:14.867172    6125 cache.go:115] /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0828 10:50:14.867178    6125 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 107.334µs
	I0828 10:50:14.867182    6125 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0828 10:50:14.867181    6125 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 71.416µs
	I0828 10:50:14.867186    6125 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0828 10:50:14.867190    6125 cache.go:87] Successfully saved all images to host disk.
	I0828 10:50:14.867380    6125 start.go:360] acquireMachinesLock for no-preload-178000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:14.867415    6125 start.go:364] duration metric: took 28.625µs to acquireMachinesLock for "no-preload-178000"
	I0828 10:50:14.867427    6125 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:50:14.867434    6125 fix.go:54] fixHost starting: 
	I0828 10:50:14.867538    6125 fix.go:112] recreateIfNeeded on no-preload-178000: state=Stopped err=<nil>
	W0828 10:50:14.867545    6125 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:50:14.875796    6125 out.go:177] * Restarting existing qemu2 VM for "no-preload-178000" ...
	I0828 10:50:14.879935    6125 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:14.879966    6125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:34:61:da:6a:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2
	I0828 10:50:14.881767    6125 main.go:141] libmachine: STDOUT: 
	I0828 10:50:14.881785    6125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:14.881810    6125 fix.go:56] duration metric: took 14.378708ms for fixHost
	I0828 10:50:14.881814    6125 start.go:83] releasing machines lock for "no-preload-178000", held for 14.396208ms
	W0828 10:50:14.881821    6125 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:14.881846    6125 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:14.881850    6125 start.go:729] Will try again in 5 seconds ...
	I0828 10:50:19.882285    6125 start.go:360] acquireMachinesLock for no-preload-178000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:19.882737    6125 start.go:364] duration metric: took 328.708µs to acquireMachinesLock for "no-preload-178000"
	I0828 10:50:19.882851    6125 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:50:19.882871    6125 fix.go:54] fixHost starting: 
	I0828 10:50:19.883612    6125 fix.go:112] recreateIfNeeded on no-preload-178000: state=Stopped err=<nil>
	W0828 10:50:19.883641    6125 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:50:19.902185    6125 out.go:177] * Restarting existing qemu2 VM for "no-preload-178000" ...
	I0828 10:50:19.907034    6125 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:19.907203    6125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:34:61:da:6a:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/no-preload-178000/disk.qcow2
	I0828 10:50:19.916364    6125 main.go:141] libmachine: STDOUT: 
	I0828 10:50:19.916495    6125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:19.916606    6125 fix.go:56] duration metric: took 33.739208ms for fixHost
	I0828 10:50:19.916632    6125 start.go:83] releasing machines lock for "no-preload-178000", held for 33.871ms
	W0828 10:50:19.916833    6125 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-178000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-178000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:19.926002    6125 out.go:201] 
	W0828 10:50:19.929084    6125 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:19.929110    6125 out.go:270] * 
	* 
	W0828 10:50:19.931802    6125 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:50:19.941007    6125 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-178000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000: exit status 7 (67.599583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
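
All of the no-preload failures above share one root cause: nothing is listening on /var/run/socket_vmnet, so every QEMU launch that minikube wraps in socket_vmnet_client dies with "Connection refused" before the VM can boot. A minimal pre-flight check on the build host, sketched on the assumption that socket_vmnet is installed under /opt/socket_vmnet (as the client path in the log indicates) and runs as a launchd-managed daemon:

	# does the unix socket exist, and is a daemon process alive to serve it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if nothing matches, inspect the launchd job (the job label varies by install method)
	sudo launchctl list | grep -i socket_vmnet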

TestStartStop/group/embed-certs/serial/FirstStart (10.08s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-555000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-555000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.016662333s)

-- stdout --
	* [embed-certs-555000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-555000" primary control-plane node in "embed-certs-555000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-555000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:50:17.276614    6135 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:17.276760    6135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:17.276763    6135 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:17.276766    6135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:17.276872    6135 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:17.277917    6135 out.go:352] Setting JSON to false
	I0828 10:50:17.293838    6135 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4781,"bootTime":1724862636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:50:17.293909    6135 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:50:17.298424    6135 out.go:177] * [embed-certs-555000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:50:17.306359    6135 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:50:17.306428    6135 notify.go:220] Checking for updates...
	I0828 10:50:17.313496    6135 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:50:17.316338    6135 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:50:17.319444    6135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:50:17.322434    6135 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:50:17.323977    6135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:50:17.327746    6135 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:17.327816    6135 config.go:182] Loaded profile config "no-preload-178000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:17.327867    6135 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:50:17.332454    6135 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:50:17.338399    6135 start.go:297] selected driver: qemu2
	I0828 10:50:17.338406    6135 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:50:17.338418    6135 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:50:17.340546    6135 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:50:17.343484    6135 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:50:17.346595    6135 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:50:17.346619    6135 cni.go:84] Creating CNI manager for ""
	I0828 10:50:17.346629    6135 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:50:17.346639    6135 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 10:50:17.346666    6135 start.go:340] cluster config:
	{Name:embed-certs-555000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:50:17.350272    6135 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:17.357463    6135 out.go:177] * Starting "embed-certs-555000" primary control-plane node in "embed-certs-555000" cluster
	I0828 10:50:17.361349    6135 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:50:17.361364    6135 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:50:17.361373    6135 cache.go:56] Caching tarball of preloaded images
	I0828 10:50:17.361431    6135 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:50:17.361438    6135 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:50:17.361507    6135 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/embed-certs-555000/config.json ...
	I0828 10:50:17.361519    6135 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/embed-certs-555000/config.json: {Name:mk0559e223c794bdce6271d29574f115f9d4c8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:50:17.361978    6135 start.go:360] acquireMachinesLock for embed-certs-555000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:17.362011    6135 start.go:364] duration metric: took 27.791µs to acquireMachinesLock for "embed-certs-555000"
	I0828 10:50:17.362023    6135 start.go:93] Provisioning new machine with config: &{Name:embed-certs-555000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:50:17.362050    6135 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:50:17.370406    6135 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:50:17.388485    6135 start.go:159] libmachine.API.Create for "embed-certs-555000" (driver="qemu2")
	I0828 10:50:17.388517    6135 client.go:168] LocalClient.Create starting
	I0828 10:50:17.388578    6135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:50:17.388608    6135 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:17.388617    6135 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:17.388660    6135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:50:17.388686    6135 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:17.388695    6135 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:17.389259    6135 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:50:17.567538    6135 main.go:141] libmachine: Creating SSH key...
	I0828 10:50:17.630722    6135 main.go:141] libmachine: Creating Disk image...
	I0828 10:50:17.630727    6135 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:50:17.630904    6135 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2
	I0828 10:50:17.639969    6135 main.go:141] libmachine: STDOUT: 
	I0828 10:50:17.639988    6135 main.go:141] libmachine: STDERR: 
	I0828 10:50:17.640047    6135 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2 +20000M
	I0828 10:50:17.647933    6135 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:50:17.647953    6135 main.go:141] libmachine: STDERR: 
	I0828 10:50:17.647968    6135 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2
	I0828 10:50:17.647974    6135 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:50:17.647988    6135 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:17.648026    6135 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:24:fb:32:4e:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2
	I0828 10:50:17.649605    6135 main.go:141] libmachine: STDOUT: 
	I0828 10:50:17.649620    6135 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:17.649638    6135 client.go:171] duration metric: took 261.124792ms to LocalClient.Create
	I0828 10:50:19.651780    6135 start.go:128] duration metric: took 2.28978225s to createHost
	I0828 10:50:19.651831    6135 start.go:83] releasing machines lock for "embed-certs-555000", held for 2.28988825s
	W0828 10:50:19.651904    6135 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:19.670095    6135 out.go:177] * Deleting "embed-certs-555000" in qemu2 ...
	W0828 10:50:19.701402    6135 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:19.701424    6135 start.go:729] Will try again in 5 seconds ...
	I0828 10:50:24.703507    6135 start.go:360] acquireMachinesLock for embed-certs-555000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:24.704035    6135 start.go:364] duration metric: took 401.125µs to acquireMachinesLock for "embed-certs-555000"
	I0828 10:50:24.704170    6135 start.go:93] Provisioning new machine with config: &{Name:embed-certs-555000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:50:24.704616    6135 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:50:24.713259    6135 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:50:24.767001    6135 start.go:159] libmachine.API.Create for "embed-certs-555000" (driver="qemu2")
	I0828 10:50:24.767075    6135 client.go:168] LocalClient.Create starting
	I0828 10:50:24.767178    6135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:50:24.767243    6135 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:24.767263    6135 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:24.767321    6135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:50:24.767365    6135 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:24.767380    6135 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:24.767979    6135 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:50:24.969352    6135 main.go:141] libmachine: Creating SSH key...
	I0828 10:50:25.197498    6135 main.go:141] libmachine: Creating Disk image...
	I0828 10:50:25.197509    6135 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:50:25.197725    6135 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2
	I0828 10:50:25.207177    6135 main.go:141] libmachine: STDOUT: 
	I0828 10:50:25.207203    6135 main.go:141] libmachine: STDERR: 
	I0828 10:50:25.207261    6135 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2 +20000M
	I0828 10:50:25.215224    6135 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:50:25.215239    6135 main.go:141] libmachine: STDERR: 
	I0828 10:50:25.215264    6135 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2
	I0828 10:50:25.215270    6135 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:50:25.215280    6135 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:25.215315    6135 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:c4:13:1e:ff:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2
	I0828 10:50:25.216903    6135 main.go:141] libmachine: STDOUT: 
	I0828 10:50:25.216917    6135 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:25.216931    6135 client.go:171] duration metric: took 449.865833ms to LocalClient.Create
	I0828 10:50:27.219064    6135 start.go:128] duration metric: took 2.514489s to createHost
	I0828 10:50:27.219144    6135 start.go:83] releasing machines lock for "embed-certs-555000", held for 2.515169416s
	W0828 10:50:27.219570    6135 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-555000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-555000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:27.229220    6135 out.go:201] 
	W0828 10:50:27.237357    6135 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:27.237382    6135 out.go:270] * 
	* 
	W0828 10:50:27.240039    6135 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:50:27.250252    6135 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-555000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000: exit status 7 (65.256208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-555000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.08s)
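
Note that the host-side create path is healthy: qemu-img convert and qemu-img resize both return with empty STDERR and the disk image is written. Only the final step, launching qemu-system-aarch64 through socket_vmnet_client, fails. The networking layer can be probed in isolation, without minikube; this sketch assumes socket_vmnet_client takes the socket path followed by a command to exec, the same shape as the invocation recorded in the log:

	# execs 'true' with the vmnet fd attached; while the daemon is down
	# it fails fast with the same "Connection refused"
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	echo $?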

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-178000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000: exit status 7 (32.798833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
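
The context "no-preload-178000" does not exist error follows directly from the failed starts: minikube only writes a kubeconfig context once a cluster actually comes up, so kubectl has nothing to target. Standard kubectl against the run's kubeconfig confirms which contexts exist:

	KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig \
	  kubectl config get-contexts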

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-178000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-178000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-178000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.678959ms)

** stderr ** 
	error: context "no-preload-178000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-178000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000: exit status 7 (28.440208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-178000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000: exit status 7 (28.765792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
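
Every entry sits on the -want side because the VM never booted, so no images were ever loaded into the runtime. The expected set is the stock control-plane image list for the requested Kubernetes version plus minikube's own storage-provisioner; all but storage-provisioner can be cross-checked independently of minikube (a sketch, assuming a kubeadm binary of a matching minor version is on PATH):

	kubeadm config images list --kubernetes-version v1.31.0
	# registry.k8s.io/kube-apiserver:v1.31.0
	# registry.k8s.io/kube-controller-manager:v1.31.0
	# ...and so on for scheduler, proxy, coredns, pause, etcd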

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-178000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-178000 --alsologtostderr -v=1: exit status 83 (39.7ms)

-- stdout --
	* The control-plane node no-preload-178000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-178000"

-- /stdout --
** stderr ** 
	I0828 10:50:20.209178    6157 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:20.209328    6157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:20.209331    6157 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:20.209334    6157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:20.209465    6157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:20.209687    6157 out.go:352] Setting JSON to false
	I0828 10:50:20.209695    6157 mustload.go:65] Loading cluster: no-preload-178000
	I0828 10:50:20.209908    6157 config.go:182] Loaded profile config "no-preload-178000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:20.214657    6157 out.go:177] * The control-plane node no-preload-178000 host is not running: state=Stopped
	I0828 10:50:20.217659    6157 out.go:177]   To start a cluster, run: "minikube start -p no-preload-178000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-178000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000: exit status 7 (29.146ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-178000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000: exit status 7 (28.770667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
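
Exit status 83 here is advisory rather than a crash: pause declines to act because the control-plane host is stopped, and prints the "To start a cluster" hint instead. Scripted callers can gate pause on the same host probe the harness itself uses (commands adapted from this run):

	HOST=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p no-preload-178000)
	[ "$HOST" = "Running" ] && out/minikube-darwin-arm64 pause -p no-preload-178000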

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-713000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-713000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.819869083s)

-- stdout --
	* [default-k8s-diff-port-713000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-713000" primary control-plane node in "default-k8s-diff-port-713000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-713000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:50:20.630348    6181 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:20.630471    6181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:20.630475    6181 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:20.630477    6181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:20.630590    6181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:20.631737    6181 out.go:352] Setting JSON to false
	I0828 10:50:20.647826    6181 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4784,"bootTime":1724862636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:50:20.647902    6181 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:50:20.652602    6181 out.go:177] * [default-k8s-diff-port-713000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:50:20.664683    6181 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:50:20.664739    6181 notify.go:220] Checking for updates...
	I0828 10:50:20.671551    6181 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:50:20.675682    6181 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:50:20.678673    6181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:50:20.681587    6181 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:50:20.684647    6181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:50:20.687961    6181 config.go:182] Loaded profile config "embed-certs-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:20.688027    6181 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:20.688081    6181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:50:20.691584    6181 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:50:20.698654    6181 start.go:297] selected driver: qemu2
	I0828 10:50:20.698659    6181 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:50:20.698665    6181 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:50:20.701091    6181 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 10:50:20.704653    6181 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:50:20.707629    6181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:50:20.707662    6181 cni.go:84] Creating CNI manager for ""
	I0828 10:50:20.707669    6181 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:50:20.707673    6181 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 10:50:20.707698    6181 start.go:340] cluster config:
	{Name:default-k8s-diff-port-713000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-713000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:50:20.711476    6181 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:20.719632    6181 out.go:177] * Starting "default-k8s-diff-port-713000" primary control-plane node in "default-k8s-diff-port-713000" cluster
	I0828 10:50:20.723624    6181 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:50:20.723639    6181 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:50:20.723649    6181 cache.go:56] Caching tarball of preloaded images
	I0828 10:50:20.723719    6181 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:50:20.723726    6181 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:50:20.723804    6181 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/default-k8s-diff-port-713000/config.json ...
	I0828 10:50:20.723817    6181 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/default-k8s-diff-port-713000/config.json: {Name:mk1e4addca778092603e414d09f29efd5a5c1d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:50:20.724055    6181 start.go:360] acquireMachinesLock for default-k8s-diff-port-713000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:20.724093    6181 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "default-k8s-diff-port-713000"
	I0828 10:50:20.724106    6181 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-713000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-713000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:50:20.724147    6181 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:50:20.732682    6181 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:50:20.751135    6181 start.go:159] libmachine.API.Create for "default-k8s-diff-port-713000" (driver="qemu2")
	I0828 10:50:20.751166    6181 client.go:168] LocalClient.Create starting
	I0828 10:50:20.751240    6181 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:50:20.751281    6181 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:20.751291    6181 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:20.751337    6181 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:50:20.751362    6181 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:20.751370    6181 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:20.751743    6181 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:50:20.913085    6181 main.go:141] libmachine: Creating SSH key...
	I0828 10:50:20.967703    6181 main.go:141] libmachine: Creating Disk image...
	I0828 10:50:20.967708    6181 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:50:20.967879    6181 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2
	I0828 10:50:20.976940    6181 main.go:141] libmachine: STDOUT: 
	I0828 10:50:20.976960    6181 main.go:141] libmachine: STDERR: 
	I0828 10:50:20.977005    6181 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2 +20000M
	I0828 10:50:20.984943    6181 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:50:20.984958    6181 main.go:141] libmachine: STDERR: 
	I0828 10:50:20.984977    6181 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2
	I0828 10:50:20.984983    6181 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:50:20.984997    6181 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:20.985021    6181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:01:d0:25:ce:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2
	I0828 10:50:20.986618    6181 main.go:141] libmachine: STDOUT: 
	I0828 10:50:20.986634    6181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:20.986656    6181 client.go:171] duration metric: took 235.494041ms to LocalClient.Create
	I0828 10:50:22.988779    6181 start.go:128] duration metric: took 2.264686083s to createHost
	I0828 10:50:22.988842    6181 start.go:83] releasing machines lock for "default-k8s-diff-port-713000", held for 2.264816333s
	W0828 10:50:22.988928    6181 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:22.999396    6181 out.go:177] * Deleting "default-k8s-diff-port-713000" in qemu2 ...
	W0828 10:50:23.035692    6181 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:23.035730    6181 start.go:729] Will try again in 5 seconds ...
	I0828 10:50:28.037665    6181 start.go:360] acquireMachinesLock for default-k8s-diff-port-713000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:28.037917    6181 start.go:364] duration metric: took 202.458µs to acquireMachinesLock for "default-k8s-diff-port-713000"
	I0828 10:50:28.038044    6181 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-713000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-713000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:50:28.038215    6181 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:50:28.045563    6181 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:50:28.092703    6181 start.go:159] libmachine.API.Create for "default-k8s-diff-port-713000" (driver="qemu2")
	I0828 10:50:28.092765    6181 client.go:168] LocalClient.Create starting
	I0828 10:50:28.092874    6181 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:50:28.092975    6181 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:28.092997    6181 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:28.093058    6181 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:50:28.093093    6181 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:28.093113    6181 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:28.093691    6181 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:50:28.282464    6181 main.go:141] libmachine: Creating SSH key...
	I0828 10:50:28.354317    6181 main.go:141] libmachine: Creating Disk image...
	I0828 10:50:28.354326    6181 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:50:28.354541    6181 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2
	I0828 10:50:28.363792    6181 main.go:141] libmachine: STDOUT: 
	I0828 10:50:28.363810    6181 main.go:141] libmachine: STDERR: 
	I0828 10:50:28.363870    6181 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2 +20000M
	I0828 10:50:28.371723    6181 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:50:28.371739    6181 main.go:141] libmachine: STDERR: 
	I0828 10:50:28.371753    6181 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2
	I0828 10:50:28.371763    6181 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:50:28.371776    6181 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:28.371801    6181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:c5:89:9a:fc:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2
	I0828 10:50:28.373433    6181 main.go:141] libmachine: STDOUT: 
	I0828 10:50:28.373452    6181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:28.373466    6181 client.go:171] duration metric: took 280.703792ms to LocalClient.Create
	I0828 10:50:30.375635    6181 start.go:128] duration metric: took 2.337470834s to createHost
	I0828 10:50:30.375683    6181 start.go:83] releasing machines lock for "default-k8s-diff-port-713000", held for 2.337820083s
	W0828 10:50:30.376037    6181 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-713000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-713000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:30.384633    6181 out.go:201] 
	W0828 10:50:30.394786    6181 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:30.394826    6181 out.go:270] * 
	* 
	W0828 10:50:30.397564    6181 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:50:30.407710    6181 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-713000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000: exit status 7 (64.827875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-713000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)
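
Every failure in this group bottoms out at the same driver error: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu-system-aarch64 process is never launched. A minimal diagnostic sketch in Go (not part of the test suite; the socket path is taken from the log above) that distinguishes "daemon not listening" from "socket never created":

	// socketprobe.go: dial the unix socket socket_vmnet_client reports as
	// unreachable. ECONNREFUSED means the socket file exists but no daemon
	// is accepting; ENOENT means socket_vmnet was never started.
	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"net"
		"os"
		"syscall"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the log above

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		switch {
		case err == nil:
			conn.Close()
			fmt.Println("socket_vmnet is up and accepting connections")
		case errors.Is(err, syscall.ECONNREFUSED):
			fmt.Println("socket file exists but no daemon is listening (matches this log)")
			os.Exit(1)
		case errors.Is(err, fs.ErrNotExist):
			fmt.Println("socket file missing: socket_vmnet was never started")
			os.Exit(1)
		default:
			fmt.Printf("unexpected error: %v\n", err)
			os.Exit(1)
		}
	}

Since the runs above report "Connection refused" rather than "no such file or directory", the likely state on this host is that the socket path exists but the socket_vmnet daemon is not running.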

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-555000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-555000 create -f testdata/busybox.yaml: exit status 1 (29.947625ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-555000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-555000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000: exit status 7 (28.981625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-555000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000: exit status 7 (28.387875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-555000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
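
This failure (and the addon and second-start failures below) is downstream of the failed FirstStart: because the VM never booted, minikube never wrote an embed-certs-555000 entry into the kubeconfig, so kubectl rejects --context embed-certs-555000 before contacting any cluster. A sketch using client-go's clientcmd package (an assumption for illustration; the tests themselves just shell out to kubectl) showing how to check whether the context was ever registered:

	// contextcheck.go: report whether a minikube profile's context made it
	// into the kubeconfig that the failing runs point KUBECONFIG at.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := os.Getenv("KUBECONFIG")
		if path == "" {
			path = clientcmd.RecommendedHomeFile // ~/.kube/config
		}

		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Fprintf(os.Stderr, "cannot read kubeconfig: %v\n", err)
			os.Exit(1)
		}

		const want = "embed-certs-555000" // profile name from the log above
		if _, ok := cfg.Contexts[want]; !ok {
			// The state this log shows: start failed before the kubeconfig
			// was updated, so kubectl reports the context does not exist.
			fmt.Printf("context %q not found; cluster start never completed\n", want)
			os.Exit(1)
		}
		fmt.Printf("context %q present\n", want)
	}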

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-555000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-555000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-555000 describe deploy/metrics-server -n kube-system: exit status 1 (26.765208ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-555000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-555000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000: exit status 7 (29.368792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-555000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-713000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-713000 create -f testdata/busybox.yaml: exit status 1 (29.164417ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-713000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-713000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000: exit status 7 (29.131542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-713000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000: exit status 7 (28.66025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-713000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-713000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-713000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-713000 describe deploy/metrics-server -n kube-system: exit status 1 (26.8705ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-713000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-713000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000: exit status 7 (29.573375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-713000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-555000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-555000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.184459666s)

                                                
                                                
-- stdout --
	* [embed-certs-555000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-555000" primary control-plane node in "embed-certs-555000" cluster
	* Restarting existing qemu2 VM for "embed-certs-555000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-555000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 10:50:31.205564    6261 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:31.205701    6261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:31.205704    6261 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:31.205707    6261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:31.205846    6261 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:31.206891    6261 out.go:352] Setting JSON to false
	I0828 10:50:31.223036    6261 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4795,"bootTime":1724862636,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:50:31.223095    6261 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:50:31.227029    6261 out.go:177] * [embed-certs-555000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:50:31.233965    6261 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:50:31.233995    6261 notify.go:220] Checking for updates...
	I0828 10:50:31.241896    6261 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:50:31.244947    6261 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:50:31.247981    6261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:50:31.250917    6261 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:50:31.253911    6261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:50:31.257274    6261 config.go:182] Loaded profile config "embed-certs-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:31.257546    6261 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:50:31.260895    6261 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:50:31.267963    6261 start.go:297] selected driver: qemu2
	I0828 10:50:31.267972    6261 start.go:901] validating driver "qemu2" against &{Name:embed-certs-555000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:50:31.268051    6261 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:50:31.270505    6261 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:50:31.270553    6261 cni.go:84] Creating CNI manager for ""
	I0828 10:50:31.270561    6261 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:50:31.270588    6261 start.go:340] cluster config:
	{Name:embed-certs-555000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:50:31.274340    6261 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:31.282914    6261 out.go:177] * Starting "embed-certs-555000" primary control-plane node in "embed-certs-555000" cluster
	I0828 10:50:31.285926    6261 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:50:31.285939    6261 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:50:31.285949    6261 cache.go:56] Caching tarball of preloaded images
	I0828 10:50:31.286009    6261 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:50:31.286014    6261 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:50:31.286070    6261 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/embed-certs-555000/config.json ...
	I0828 10:50:31.286583    6261 start.go:360] acquireMachinesLock for embed-certs-555000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:31.286611    6261 start.go:364] duration metric: took 22.375µs to acquireMachinesLock for "embed-certs-555000"
	I0828 10:50:31.286621    6261 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:50:31.286629    6261 fix.go:54] fixHost starting: 
	I0828 10:50:31.286750    6261 fix.go:112] recreateIfNeeded on embed-certs-555000: state=Stopped err=<nil>
	W0828 10:50:31.286759    6261 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:50:31.294723    6261 out.go:177] * Restarting existing qemu2 VM for "embed-certs-555000" ...
	I0828 10:50:31.298942    6261 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:31.298978    6261 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:c4:13:1e:ff:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2
	I0828 10:50:31.301153    6261 main.go:141] libmachine: STDOUT: 
	I0828 10:50:31.301174    6261 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:31.301206    6261 fix.go:56] duration metric: took 14.578667ms for fixHost
	I0828 10:50:31.301211    6261 start.go:83] releasing machines lock for "embed-certs-555000", held for 14.595708ms
	W0828 10:50:31.301218    6261 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:31.301246    6261 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:31.301251    6261 start.go:729] Will try again in 5 seconds ...
	I0828 10:50:36.303362    6261 start.go:360] acquireMachinesLock for embed-certs-555000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:36.303814    6261 start.go:364] duration metric: took 354.209µs to acquireMachinesLock for "embed-certs-555000"
	I0828 10:50:36.303944    6261 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:50:36.303963    6261 fix.go:54] fixHost starting: 
	I0828 10:50:36.304693    6261 fix.go:112] recreateIfNeeded on embed-certs-555000: state=Stopped err=<nil>
	W0828 10:50:36.304731    6261 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:50:36.314250    6261 out.go:177] * Restarting existing qemu2 VM for "embed-certs-555000" ...
	I0828 10:50:36.317292    6261 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:36.317503    6261 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:c4:13:1e:ff:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/embed-certs-555000/disk.qcow2
	I0828 10:50:36.326612    6261 main.go:141] libmachine: STDOUT: 
	I0828 10:50:36.326678    6261 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:36.326746    6261 fix.go:56] duration metric: took 22.784958ms for fixHost
	I0828 10:50:36.326766    6261 start.go:83] releasing machines lock for "embed-certs-555000", held for 22.922667ms
	W0828 10:50:36.326927    6261 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-555000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-555000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:36.334241    6261 out.go:201] 
	W0828 10:50:36.338335    6261 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:36.338360    6261 out.go:270] * 
	* 
	W0828 10:50:36.340875    6261 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:50:36.348124    6261 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-555000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000: exit status 7 (66.510208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-555000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
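
The second start takes the fixHost path ("Restarting existing qemu2 VM") instead of creating a new disk, but hits the same socket_vmnet error on both attempts. The control flow visible in the log is a single fixed-delay retry; a simplified sketch of that shape (hypothetical, not minikube's actual code):

	// retrysketch.go: one retry after a fixed 5s delay, then give up with
	// the provisioning error, mirroring the log lines above.
	package main

	import (
		"fmt"
		"time"
	)

	// startHost stands in for minikube's createHost/fixHost step; here it
	// always fails the way this log does.
	func startHost(profile string) error {
		return fmt.Errorf(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const profile = "embed-certs-555000"

		err := startHost(profile)
		if err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			err = startHost(profile)
		}
		if err != nil {
			// Matches the final GUEST_PROVISION exit in the log.
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}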

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-713000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-713000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.196050292s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-713000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-713000" primary control-plane node in "default-k8s-diff-port-713000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-713000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-713000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 10:50:34.708406    6284 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:34.708549    6284 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:34.708553    6284 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:34.708555    6284 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:34.708688    6284 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:34.709661    6284 out.go:352] Setting JSON to false
	I0828 10:50:34.725569    6284 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4798,"bootTime":1724862636,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:50:34.725640    6284 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:50:34.730379    6284 out.go:177] * [default-k8s-diff-port-713000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:50:34.737491    6284 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:50:34.737555    6284 notify.go:220] Checking for updates...
	I0828 10:50:34.743532    6284 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:50:34.746456    6284 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:50:34.747930    6284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:50:34.750510    6284 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:50:34.753464    6284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:50:34.756788    6284 config.go:182] Loaded profile config "default-k8s-diff-port-713000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:34.757114    6284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:50:34.760418    6284 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:50:34.767449    6284 start.go:297] selected driver: qemu2
	I0828 10:50:34.767472    6284 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-713000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-713000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:50:34.767519    6284 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:50:34.769532    6284 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 10:50:34.769556    6284 cni.go:84] Creating CNI manager for ""
	I0828 10:50:34.769563    6284 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:50:34.769582    6284 start.go:340] cluster config:
	{Name:default-k8s-diff-port-713000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-713000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:50:34.772710    6284 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:34.781441    6284 out.go:177] * Starting "default-k8s-diff-port-713000" primary control-plane node in "default-k8s-diff-port-713000" cluster
	I0828 10:50:34.788513    6284 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:50:34.788528    6284 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:50:34.788537    6284 cache.go:56] Caching tarball of preloaded images
	I0828 10:50:34.788597    6284 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:50:34.788603    6284 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:50:34.788669    6284 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/default-k8s-diff-port-713000/config.json ...
	I0828 10:50:34.789132    6284 start.go:360] acquireMachinesLock for default-k8s-diff-port-713000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:34.789157    6284 start.go:364] duration metric: took 19.542µs to acquireMachinesLock for "default-k8s-diff-port-713000"
	I0828 10:50:34.789166    6284 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:50:34.789171    6284 fix.go:54] fixHost starting: 
	I0828 10:50:34.789281    6284 fix.go:112] recreateIfNeeded on default-k8s-diff-port-713000: state=Stopped err=<nil>
	W0828 10:50:34.789289    6284 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:50:34.793479    6284 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-713000" ...
	I0828 10:50:34.801438    6284 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:34.801468    6284 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:c5:89:9a:fc:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2
	I0828 10:50:34.803260    6284 main.go:141] libmachine: STDOUT: 
	I0828 10:50:34.803277    6284 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:34.803303    6284 fix.go:56] duration metric: took 14.132541ms for fixHost
	I0828 10:50:34.803306    6284 start.go:83] releasing machines lock for "default-k8s-diff-port-713000", held for 14.145958ms
	W0828 10:50:34.803313    6284 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:34.803344    6284 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:34.803348    6284 start.go:729] Will try again in 5 seconds ...
	I0828 10:50:39.805434    6284 start.go:360] acquireMachinesLock for default-k8s-diff-port-713000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:39.805924    6284 start.go:364] duration metric: took 363.458µs to acquireMachinesLock for "default-k8s-diff-port-713000"
	I0828 10:50:39.806051    6284 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:50:39.806071    6284 fix.go:54] fixHost starting: 
	I0828 10:50:39.806856    6284 fix.go:112] recreateIfNeeded on default-k8s-diff-port-713000: state=Stopped err=<nil>
	W0828 10:50:39.806884    6284 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:50:39.827514    6284 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-713000" ...
	I0828 10:50:39.831227    6284 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:39.831429    6284 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:c5:89:9a:fc:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/default-k8s-diff-port-713000/disk.qcow2
	I0828 10:50:39.840738    6284 main.go:141] libmachine: STDOUT: 
	I0828 10:50:39.840820    6284 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:39.840922    6284 fix.go:56] duration metric: took 34.849875ms for fixHost
	I0828 10:50:39.840944    6284 start.go:83] releasing machines lock for "default-k8s-diff-port-713000", held for 34.995541ms
	W0828 10:50:39.841143    6284 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-713000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-713000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:39.849346    6284 out.go:201] 
	W0828 10:50:39.850901    6284 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:39.850954    6284 out.go:270] * 
	* 
	W0828 10:50:39.853724    6284 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:50:39.863303    6284 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-713000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000: exit status 7 (66.2905ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-713000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
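
Every qemu2 failure in this group reduces to one driver error: the qemu2 driver boots QEMU through socket_vmnet_client, which needs a socket_vmnet daemon listening on /var/run/socket_vmnet, and that connection is refused. A quick way to confirm the daemon is down is to dial the socket directly; the Go sketch below is a minimal diagnostic under that assumption (the file name and program are illustrative, not part of the test suite).

// probe_socket_vmnet.go - minimal diagnostic sketch (illustrative, not part
// of the suite): dial the unix socket the qemu2 driver uses and report
// whether anything is listening. "connection refused" here corresponds to
// the driver error quoted in the log above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const path = "/var/run/socket_vmnet" // path taken from the log above

	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", path)
}

If the dial fails, the fix is on the host (restart the socket_vmnet service), not in minikube; the suggested "minikube delete -p ..." cannot help while the daemon itself is down.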

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-555000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000: exit status 7 (31.694917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-555000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
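
The post-mortem helper extracts a single field with status --format={{.Host}}, which is Go text/template syntax applied to minikube's status value. The sketch below shows that mechanism with a hypothetical Status type (not minikube's actual struct); {{.Host}} renders only the Host field, which is why the helper output above is the bare word Stopped.

// status_format.go - a minimal sketch of how a --format={{.Host}} flag is
// rendered: the status value is passed through text/template, so the
// template picks out one field. The Status type here is hypothetical.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
}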

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-555000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-555000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-555000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.57325ms)

** stderr ** 
	error: context "embed-certs-555000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-555000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000: exit status 7 (28.786084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-555000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-555000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000: exit status 7 (29.115875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-555000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
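
The "(-want +got)" listing above follows the github.com/google/go-cmp diff convention: "-" lines are expected entries missing from the result, "+" lines would be unexpected extras. Here got is empty because "image list" ran against a VM that never started, so every expected v1.31.0 image lands on a "-" line. A minimal sketch of how such a diff is produced, assuming the go-cmp tooling the output's notation matches:

// diff_images.go - a minimal sketch of a "-want +got" listing produced with
// github.com/google/go-cmp (an assumption about the tooling; the notation
// matches). got is empty, so every expected image appears on a "-" line.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // no VM, no images

	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
	}
}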

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-555000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-555000 --alsologtostderr -v=1: exit status 83 (38.614292ms)

-- stdout --
	* The control-plane node embed-certs-555000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-555000"

-- /stdout --
** stderr ** 
	I0828 10:50:36.615540    6303 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:36.615898    6303 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:36.615902    6303 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:36.615905    6303 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:36.616090    6303 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:36.616342    6303 out.go:352] Setting JSON to false
	I0828 10:50:36.616354    6303 mustload.go:65] Loading cluster: embed-certs-555000
	I0828 10:50:36.616684    6303 config.go:182] Loaded profile config "embed-certs-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:36.620357    6303 out.go:177] * The control-plane node embed-certs-555000 host is not running: state=Stopped
	I0828 10:50:36.624341    6303 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-555000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-555000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000: exit status 7 (29.37575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-555000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000: exit status 7 (29.015ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-555000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
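
Note that pause exits with status 83 rather than attempting anything: the profile loads (mustload.go:65), the host state comes back Stopped, and the command prints advice and bails out. Below is a minimal sketch of that guard shape, with a hypothetical hostState helper standing in for the libmachine state query; the exit code is simply the one observed in the run above, not a claim about minikube's internal reason codes.

// pause_guard.go - a minimal sketch of the guard visible in the pause run
// above, not minikube's actual implementation. hostState is a hypothetical
// stand-in for the libmachine state query.
package main

import (
	"fmt"
	"os"
)

func hostState(profile string) string {
	return "Stopped" // the runs above always report Stopped
}

func main() {
	profile := "embed-certs-555000"
	if st := hostState(profile); st != "Running" {
		fmt.Printf("* The control-plane node %s host is not running: state=%s\n", profile, st)
		fmt.Printf("  To start a cluster, run: %q\n", "minikube start -p "+profile)
		os.Exit(83) // exit status observed in the log above
	}
}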

TestStartStop/group/newest-cni/serial/FirstStart (10.11s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-413000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-413000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.04248225s)

-- stdout --
	* [newest-cni-413000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-413000" primary control-plane node in "newest-cni-413000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:50:36.931988    6320 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:36.932122    6320 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:36.932125    6320 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:36.932128    6320 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:36.932259    6320 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:36.933301    6320 out.go:352] Setting JSON to false
	I0828 10:50:36.949422    6320 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4800,"bootTime":1724862636,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:50:36.949495    6320 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:50:36.954396    6320 out.go:177] * [newest-cni-413000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:50:36.960245    6320 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:50:36.960308    6320 notify.go:220] Checking for updates...
	I0828 10:50:36.967361    6320 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:50:36.970324    6320 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:50:36.973320    6320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:50:36.976361    6320 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:50:36.979326    6320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:50:36.982633    6320 config.go:182] Loaded profile config "default-k8s-diff-port-713000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:36.982695    6320 config.go:182] Loaded profile config "multinode-223000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:36.982743    6320 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:50:36.987399    6320 out.go:177] * Using the qemu2 driver based on user configuration
	I0828 10:50:36.994305    6320 start.go:297] selected driver: qemu2
	I0828 10:50:36.994313    6320 start.go:901] validating driver "qemu2" against <nil>
	I0828 10:50:36.994320    6320 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:50:36.996626    6320 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0828 10:50:36.996653    6320 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0828 10:50:37.001357    6320 out.go:177] * Automatically selected the socket_vmnet network
	I0828 10:50:37.008439    6320 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0828 10:50:37.008471    6320 cni.go:84] Creating CNI manager for ""
	I0828 10:50:37.008479    6320 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:50:37.008483    6320 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 10:50:37.008520    6320 start.go:340] cluster config:
	{Name:newest-cni-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:50:37.012298    6320 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:37.021326    6320 out.go:177] * Starting "newest-cni-413000" primary control-plane node in "newest-cni-413000" cluster
	I0828 10:50:37.025303    6320 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:50:37.025319    6320 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:50:37.025335    6320 cache.go:56] Caching tarball of preloaded images
	I0828 10:50:37.025407    6320 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:50:37.025421    6320 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:50:37.025499    6320 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/newest-cni-413000/config.json ...
	I0828 10:50:37.025512    6320 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/newest-cni-413000/config.json: {Name:mk9cb3d0f661b4491d56c09d800d1dbe71af5a9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 10:50:37.025756    6320 start.go:360] acquireMachinesLock for newest-cni-413000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:37.025792    6320 start.go:364] duration metric: took 30.417µs to acquireMachinesLock for "newest-cni-413000"
	I0828 10:50:37.025805    6320 start.go:93] Provisioning new machine with config: &{Name:newest-cni-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:50:37.025841    6320 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:50:37.034344    6320 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:50:37.052728    6320 start.go:159] libmachine.API.Create for "newest-cni-413000" (driver="qemu2")
	I0828 10:50:37.052753    6320 client.go:168] LocalClient.Create starting
	I0828 10:50:37.052820    6320 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:50:37.052852    6320 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:37.052861    6320 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:37.052903    6320 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:50:37.052927    6320 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:37.052935    6320 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:37.053434    6320 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:50:37.215747    6320 main.go:141] libmachine: Creating SSH key...
	I0828 10:50:37.382032    6320 main.go:141] libmachine: Creating Disk image...
	I0828 10:50:37.382043    6320 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:50:37.382253    6320 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2
	I0828 10:50:37.392020    6320 main.go:141] libmachine: STDOUT: 
	I0828 10:50:37.392040    6320 main.go:141] libmachine: STDERR: 
	I0828 10:50:37.392088    6320 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2 +20000M
	I0828 10:50:37.399925    6320 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:50:37.399941    6320 main.go:141] libmachine: STDERR: 
	I0828 10:50:37.399952    6320 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2
	I0828 10:50:37.399958    6320 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:50:37.399971    6320 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:37.400007    6320 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:db:2f:30:7c:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2
	I0828 10:50:37.401625    6320 main.go:141] libmachine: STDOUT: 
	I0828 10:50:37.401640    6320 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:37.401657    6320 client.go:171] duration metric: took 348.910333ms to LocalClient.Create
	I0828 10:50:39.403765    6320 start.go:128] duration metric: took 2.37798375s to createHost
	I0828 10:50:39.403823    6320 start.go:83] releasing machines lock for "newest-cni-413000", held for 2.378101167s
	W0828 10:50:39.403880    6320 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:39.420157    6320 out.go:177] * Deleting "newest-cni-413000" in qemu2 ...
	W0828 10:50:39.452071    6320 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:39.452094    6320 start.go:729] Will try again in 5 seconds ...
	I0828 10:50:44.453293    6320 start.go:360] acquireMachinesLock for newest-cni-413000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:44.453771    6320 start.go:364] duration metric: took 379.583µs to acquireMachinesLock for "newest-cni-413000"
	I0828 10:50:44.453898    6320 start.go:93] Provisioning new machine with config: &{Name:newest-cni-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 10:50:44.454219    6320 start.go:125] createHost starting for "" (driver="qemu2")
	I0828 10:50:44.458745    6320 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 10:50:44.509028    6320 start.go:159] libmachine.API.Create for "newest-cni-413000" (driver="qemu2")
	I0828 10:50:44.509078    6320 client.go:168] LocalClient.Create starting
	I0828 10:50:44.509201    6320 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/ca.pem
	I0828 10:50:44.509275    6320 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:44.509291    6320 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:44.509355    6320 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19529-1176/.minikube/certs/cert.pem
	I0828 10:50:44.509401    6320 main.go:141] libmachine: Decoding PEM data...
	I0828 10:50:44.509415    6320 main.go:141] libmachine: Parsing certificate...
	I0828 10:50:44.509974    6320 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0828 10:50:44.686252    6320 main.go:141] libmachine: Creating SSH key...
	I0828 10:50:44.881501    6320 main.go:141] libmachine: Creating Disk image...
	I0828 10:50:44.881508    6320 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0828 10:50:44.881723    6320 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2
	I0828 10:50:44.891281    6320 main.go:141] libmachine: STDOUT: 
	I0828 10:50:44.891299    6320 main.go:141] libmachine: STDERR: 
	I0828 10:50:44.891351    6320 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2 +20000M
	I0828 10:50:44.899370    6320 main.go:141] libmachine: STDOUT: Image resized.
	
	I0828 10:50:44.899384    6320 main.go:141] libmachine: STDERR: 
	I0828 10:50:44.899395    6320 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2
	I0828 10:50:44.899398    6320 main.go:141] libmachine: Starting QEMU VM...
	I0828 10:50:44.899411    6320 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:44.899438    6320 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:03:6c:43:09:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2
	I0828 10:50:44.901086    6320 main.go:141] libmachine: STDOUT: 
	I0828 10:50:44.901103    6320 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:44.901120    6320 client.go:171] duration metric: took 392.050709ms to LocalClient.Create
	I0828 10:50:46.903228    6320 start.go:128] duration metric: took 2.44905975s to createHost
	I0828 10:50:46.903324    6320 start.go:83] releasing machines lock for "newest-cni-413000", held for 2.4495855s
	W0828 10:50:46.903649    6320 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:46.913297    6320 out.go:201] 
	W0828 10:50:46.921346    6320 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:46.921365    6320 out.go:270] * 
	* 
	W0828 10:50:46.923266    6320 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:50:46.936298    6320 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-413000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-413000 -n newest-cni-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-413000 -n newest-cni-413000: exit status 7 (69.833292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-413000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.11s)
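
FirstStart shows the full create path: the cached ISO is found, the qcow2 disk is created and resized successfully, and only the final socket_vmnet_client launch fails. The driver then deletes the half-created VM and retries once after a fixed five-second delay ("Will try again in 5 seconds"), failing the same way. A minimal sketch of that retry shape, with a hypothetical startHost standing in for the driver call:

// retry_start.go - a minimal sketch of the start/retry shape visible in the
// log (one retry after a fixed 5-second delay), not minikube's actual
// implementation. startHost is a hypothetical stand-in for the qemu2 driver
// call, failing the way the log does.
package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			log.Fatalf("X Exiting due to GUEST_PROVISION: %v", err)
		}
	}
}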

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-713000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000: exit status 7 (32.735458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-713000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-713000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-713000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-713000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.436709ms)

** stderr ** 
	error: context "default-k8s-diff-port-713000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-713000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000: exit status 7 (28.528ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-713000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-713000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000: exit status 7 (28.787042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-713000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-713000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-713000 --alsologtostderr -v=1: exit status 83 (42.654041ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-713000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-713000"

-- /stdout --
** stderr ** 
	I0828 10:50:40.132209    6342 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:40.132381    6342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:40.132385    6342 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:40.132387    6342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:40.132525    6342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:40.132734    6342 out.go:352] Setting JSON to false
	I0828 10:50:40.132741    6342 mustload.go:65] Loading cluster: default-k8s-diff-port-713000
	I0828 10:50:40.132937    6342 config.go:182] Loaded profile config "default-k8s-diff-port-713000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:40.137341    6342 out.go:177] * The control-plane node default-k8s-diff-port-713000 host is not running: state=Stopped
	I0828 10:50:40.141322    6342 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-713000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-713000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000: exit status 7 (28.8365ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-713000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000: exit status 7 (29.286583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-713000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-413000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-413000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.190506625s)

-- stdout --
	* [newest-cni-413000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-413000" primary control-plane node in "newest-cni-413000" cluster
	* Restarting existing qemu2 VM for "newest-cni-413000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-413000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0828 10:50:50.767189    6392 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:50.767313    6392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:50.767316    6392 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:50.767319    6392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:50.767462    6392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:50.768460    6392 out.go:352] Setting JSON to false
	I0828 10:50:50.784673    6392 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4814,"bootTime":1724862636,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:50:50.784738    6392 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:50:50.789905    6392 out.go:177] * [newest-cni-413000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:50:50.796769    6392 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:50:50.796830    6392 notify.go:220] Checking for updates...
	I0828 10:50:50.804913    6392 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:50:50.807925    6392 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:50:50.810937    6392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:50:50.813935    6392 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:50:50.816886    6392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:50:50.820140    6392 config.go:182] Loaded profile config "newest-cni-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:50.820393    6392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:50:50.824948    6392 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:50:50.831884    6392 start.go:297] selected driver: qemu2
	I0828 10:50:50.831890    6392 start.go:901] validating driver "qemu2" against &{Name:newest-cni-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:50:50.831941    6392 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:50:50.834350    6392 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0828 10:50:50.834389    6392 cni.go:84] Creating CNI manager for ""
	I0828 10:50:50.834400    6392 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 10:50:50.834434    6392 start.go:340] cluster config:
	{Name:newest-cni-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:50:50.838098    6392 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 10:50:50.846921    6392 out.go:177] * Starting "newest-cni-413000" primary control-plane node in "newest-cni-413000" cluster
	I0828 10:50:50.850883    6392 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 10:50:50.850900    6392 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 10:50:50.850908    6392 cache.go:56] Caching tarball of preloaded images
	I0828 10:50:50.850975    6392 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 10:50:50.850981    6392 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 10:50:50.851037    6392 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/newest-cni-413000/config.json ...
	I0828 10:50:50.851541    6392 start.go:360] acquireMachinesLock for newest-cni-413000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:50.851578    6392 start.go:364] duration metric: took 30.292µs to acquireMachinesLock for "newest-cni-413000"
	I0828 10:50:50.851589    6392 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:50:50.851597    6392 fix.go:54] fixHost starting: 
	I0828 10:50:50.851723    6392 fix.go:112] recreateIfNeeded on newest-cni-413000: state=Stopped err=<nil>
	W0828 10:50:50.851733    6392 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:50:50.855883    6392 out.go:177] * Restarting existing qemu2 VM for "newest-cni-413000" ...
	I0828 10:50:50.862794    6392 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:50.862825    6392 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:03:6c:43:09:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2
	I0828 10:50:50.864951    6392 main.go:141] libmachine: STDOUT: 
	I0828 10:50:50.864971    6392 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:50.865003    6392 fix.go:56] duration metric: took 13.408334ms for fixHost
	I0828 10:50:50.865009    6392 start.go:83] releasing machines lock for "newest-cni-413000", held for 13.4265ms
	W0828 10:50:50.865016    6392 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:50.865052    6392 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:50.865057    6392 start.go:729] Will try again in 5 seconds ...
	I0828 10:50:55.867189    6392 start.go:360] acquireMachinesLock for newest-cni-413000: {Name:mkb3d658eeaa2cb372b91f750ba03bb8e3592dfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 10:50:55.867652    6392 start.go:364] duration metric: took 343.458µs to acquireMachinesLock for "newest-cni-413000"
	I0828 10:50:55.867801    6392 start.go:96] Skipping create...Using existing machine configuration
	I0828 10:50:55.867823    6392 fix.go:54] fixHost starting: 
	I0828 10:50:55.868634    6392 fix.go:112] recreateIfNeeded on newest-cni-413000: state=Stopped err=<nil>
	W0828 10:50:55.868661    6392 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 10:50:55.878115    6392 out.go:177] * Restarting existing qemu2 VM for "newest-cni-413000" ...
	I0828 10:50:55.882190    6392 qemu.go:418] Using hvf for hardware acceleration
	I0828 10:50:55.882397    6392 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:03:6c:43:09:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19529-1176/.minikube/machines/newest-cni-413000/disk.qcow2
	I0828 10:50:55.891851    6392 main.go:141] libmachine: STDOUT: 
	I0828 10:50:55.891928    6392 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0828 10:50:55.892045    6392 fix.go:56] duration metric: took 24.223ms for fixHost
	I0828 10:50:55.892073    6392 start.go:83] releasing machines lock for "newest-cni-413000", held for 24.395167ms
	W0828 10:50:55.892251    6392 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-413000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-413000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0828 10:50:55.900208    6392 out.go:201] 
	W0828 10:50:55.904174    6392 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0828 10:50:55.904215    6392 out.go:270] * 
	* 
	W0828 10:50:55.906884    6392 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 10:50:55.914953    6392 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-413000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-413000 -n newest-cni-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-413000 -n newest-cni-413000: exit status 7 (67.225417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-413000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
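
Every start attempt in the log above fails at the same step: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"). A minimal sketch for checking the daemon on the CI host before rerunning; the daemon binary path mirrors the client path from the log, and the --vmnet-gateway value is an assumption, not taken from this report:

	pgrep -fl socket_vmnet          # is the daemon process running?
	ls -l /var/run/socket_vmnet     # does the socket it should own exist?
	# If not, start it as root (binary path and gateway address assumed; adjust to the host)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &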

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-413000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-413000 -n newest-cni-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-413000 -n newest-cni-413000: exit status 7 (29.598292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-413000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
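
The -want +got diff above lists every expected v1.31.0 image as missing because the VM never started, so "image list" had nothing to report rather than the wrong images. The expected set (minus gcr.io/k8s-minikube/storage-provisioner, which minikube adds itself) can be reproduced independently; a sketch assuming kubeadm is installed on the host, which this report does not show:

	kubeadm config images list --kubernetes-version v1.31.0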

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-413000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-413000 --alsologtostderr -v=1: exit status 83 (42.675708ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-413000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-413000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 10:50:56.101501    6409 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:50:56.101656    6409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:56.101659    6409 out.go:358] Setting ErrFile to fd 2...
	I0828 10:50:56.101662    6409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:50:56.101780    6409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:50:56.101992    6409 out.go:352] Setting JSON to false
	I0828 10:50:56.101999    6409 mustload.go:65] Loading cluster: newest-cni-413000
	I0828 10:50:56.102197    6409 config.go:182] Loaded profile config "newest-cni-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:50:56.106315    6409 out.go:177] * The control-plane node newest-cni-413000 host is not running: state=Stopped
	I0828 10:50:56.110320    6409 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-413000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-413000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-413000 -n newest-cni-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-413000 -n newest-cni-413000: exit status 7 (29.557583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-413000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-413000 -n newest-cni-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-413000 -n newest-cni-413000: exit status 7 (30.269084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-413000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
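
Exit status 83 here is the expected follow-on failure: pause needs a running control-plane host, and the profile is still Stopped after the failed SecondStart. Once the socket_vmnet daemon is reachable (see the sketch after SecondStart above), a by-hand retry would be:

	out/minikube-darwin-arm64 start -p newest-cni-413000
	out/minikube-darwin-arm64 status -p newest-cni-413000
	out/minikube-darwin-arm64 pause -p newest-cni-413000 --alsologtostderr -v=1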

                                                
                                    

Test pass (155/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 6.3
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.1
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 202.84
29 TestAddons/serial/Volcano 38.36
31 TestAddons/serial/GCPAuth/Namespaces 0.09
34 TestAddons/parallel/Ingress 18.48
35 TestAddons/parallel/InspektorGadget 10.31
36 TestAddons/parallel/MetricsServer 5.28
39 TestAddons/parallel/CSI 35.6
40 TestAddons/parallel/Headlamp 15.65
41 TestAddons/parallel/CloudSpanner 5.19
42 TestAddons/parallel/LocalPath 42.03
43 TestAddons/parallel/NvidiaDevicePlugin 5.18
44 TestAddons/parallel/Yakd 11.27
45 TestAddons/StoppedEnableDisable 9.38
53 TestHyperKitDriverInstallOrUpdate 11.14
56 TestErrorSpam/setup 34.66
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.65
60 TestErrorSpam/unpause 0.59
61 TestErrorSpam/stop 64.27
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 49.78
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.35
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 5.16
73 TestFunctional/serial/CacheCmd/cache/add_local 1.17
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.17
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.83
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 33.6
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 0.67
84 TestFunctional/serial/LogsFileCmd 0.59
85 TestFunctional/serial/InvalidService 3.57
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 12.03
89 TestFunctional/parallel/DryRun 0.32
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.24
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 26.68
99 TestFunctional/parallel/SSHCmd 0.13
100 TestFunctional/parallel/CpCmd 0.52
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.4
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.08
111 TestFunctional/parallel/License 0.21
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.27
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.1
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.93
119 TestFunctional/parallel/ImageCommands/Setup 1.69
120 TestFunctional/parallel/DockerEnv/bash 0.32
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
124 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.45
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.57
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.24
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.12
137 TestFunctional/parallel/ServiceCmd/List 0.13
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
140 TestFunctional/parallel/ServiceCmd/Format 0.09
141 TestFunctional/parallel/ServiceCmd/URL 0.1
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
149 TestFunctional/parallel/ProfileCmd/profile_list 0.12
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
151 TestFunctional/parallel/MountCmd/any-port 7.32
152 TestFunctional/parallel/MountCmd/specific-port 1.24
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
154 TestFunctional/delete_echo-server_images 0.05
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 206.53
161 TestMultiControlPlane/serial/DeployApp 6.18
162 TestMultiControlPlane/serial/PingHostFromPods 0.72
163 TestMultiControlPlane/serial/AddWorkerNode 51.78
164 TestMultiControlPlane/serial/NodeLabels 0.14
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
166 TestMultiControlPlane/serial/CopyFile 4.21
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 79.27
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 1.91
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.21
212 TestMainNoArgs 0.04
259 TestStoppedBinaryUpgrade/Setup 2.39
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.45
277 TestNoKubernetes/serial/Stop 2.09
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.62
294 TestStartStop/group/old-k8s-version/serial/Stop 2.13
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/no-preload/serial/Stop 3.08
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
318 TestStartStop/group/embed-certs/serial/Stop 3.52
321 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.87
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
338 TestStartStop/group/newest-cni/serial/Stop 3.53
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-450000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-450000: exit status 85 (92.43125ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-450000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |          |
	|         | -p download-only-450000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 09:50:28
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 09:50:28.294097    1680 out.go:345] Setting OutFile to fd 1 ...
	I0828 09:50:28.294253    1680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:50:28.294257    1680 out.go:358] Setting ErrFile to fd 2...
	I0828 09:50:28.294259    1680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:50:28.294393    1680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	W0828 09:50:28.294501    1680 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19529-1176/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19529-1176/.minikube/config/config.json: no such file or directory
	I0828 09:50:28.295720    1680 out.go:352] Setting JSON to true
	I0828 09:50:28.313012    1680 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1191,"bootTime":1724862637,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 09:50:28.313148    1680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 09:50:28.318688    1680 out.go:97] [download-only-450000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 09:50:28.318829    1680 notify.go:220] Checking for updates...
	W0828 09:50:28.318883    1680 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball: no such file or directory
	I0828 09:50:28.321645    1680 out.go:169] MINIKUBE_LOCATION=19529
	I0828 09:50:28.324621    1680 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 09:50:28.328672    1680 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 09:50:28.332650    1680 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 09:50:28.335611    1680 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	W0828 09:50:28.341595    1680 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0828 09:50:28.341795    1680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 09:50:28.346612    1680 out.go:97] Using the qemu2 driver based on user configuration
	I0828 09:50:28.346630    1680 start.go:297] selected driver: qemu2
	I0828 09:50:28.346644    1680 start.go:901] validating driver "qemu2" against <nil>
	I0828 09:50:28.346703    1680 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 09:50:28.349600    1680 out.go:169] Automatically selected the socket_vmnet network
	I0828 09:50:28.355557    1680 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0828 09:50:28.355648    1680 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 09:50:28.355726    1680 cni.go:84] Creating CNI manager for ""
	I0828 09:50:28.355744    1680 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0828 09:50:28.355796    1680 start.go:340] cluster config:
	{Name:download-only-450000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-450000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 09:50:28.361270    1680 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 09:50:28.365632    1680 out.go:97] Downloading VM boot image ...
	I0828 09:50:28.365658    1680 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso
	I0828 09:50:32.921300    1680 out.go:97] Starting "download-only-450000" primary control-plane node in "download-only-450000" cluster
	I0828 09:50:32.921319    1680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 09:50:32.983800    1680 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0828 09:50:32.983807    1680 cache.go:56] Caching tarball of preloaded images
	I0828 09:50:32.983957    1680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 09:50:32.988049    1680 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0828 09:50:32.988056    1680 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 09:50:33.135766    1680 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0828 09:50:38.681550    1680 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 09:50:38.682013    1680 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 09:50:39.378310    1680 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0828 09:50:39.378511    1680 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/download-only-450000/config.json ...
	I0828 09:50:39.378527    1680 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/download-only-450000/config.json: {Name:mkc15e7cfaa589eed2dad8ecc4d6524e9169a8ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:50:39.378763    1680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 09:50:39.378946    1680 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0828 09:50:39.909720    1680 out.go:193] 
	W0828 09:50:39.917719    1680 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19529-1176/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106f53920 0x106f53920 0x106f53920 0x106f53920 0x106f53920 0x106f53920 0x106f53920] Decompressors:map[bz2:0x140007077e0 gz:0x140007077e8 tar:0x140007076a0 tar.bz2:0x140007076b0 tar.gz:0x14000707700 tar.xz:0x14000707710 tar.zst:0x140007077c0 tbz2:0x140007076b0 tgz:0x14000707700 txz:0x14000707710 tzst:0x140007077c0 xz:0x140007077f0 zip:0x14000707ba0 zst:0x140007077f8] Getters:map[file:0x1400061b850 http:0x14000c18230 https:0x14000c18280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0828 09:50:39.917742    1680 out_reason.go:110] 
	W0828 09:50:39.925767    1680 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 09:50:39.929627    1680 out.go:193] 
	
	
	* The control-plane node download-only-450000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-450000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
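
The embedded log explains the v1.20.0 json-events and kubectl failures at the top of this report: dl.k8s.io returns 404 for the darwin/arm64 kubectl checksum at v1.20.0 (arm64 Mac binaries were not published for that release), so the cache step cannot succeed on this host. A quick check independent of minikube, assuming curl is available:

	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256   # 404 for this release
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256   # 200; arm64 exists here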

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-450000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (6.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-436000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-436000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (6.298515958s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.30s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-436000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-436000: exit status 85 (78.757708ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-450000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | -p download-only-450000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| delete  | -p download-only-450000        | download-only-450000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| start   | -o=json --download-only        | download-only-436000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | -p download-only-436000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 09:50:40
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 09:50:40.344268    1704 out.go:345] Setting OutFile to fd 1 ...
	I0828 09:50:40.344462    1704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:50:40.344465    1704 out.go:358] Setting ErrFile to fd 2...
	I0828 09:50:40.344468    1704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:50:40.344609    1704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 09:50:40.345678    1704 out.go:352] Setting JSON to true
	I0828 09:50:40.361793    1704 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1203,"bootTime":1724862637,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 09:50:40.361921    1704 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 09:50:40.365667    1704 out.go:97] [download-only-436000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 09:50:40.365778    1704 notify.go:220] Checking for updates...
	I0828 09:50:40.369621    1704 out.go:169] MINIKUBE_LOCATION=19529
	I0828 09:50:40.372600    1704 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 09:50:40.375639    1704 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 09:50:40.378657    1704 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 09:50:40.381623    1704 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	W0828 09:50:40.387584    1704 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0828 09:50:40.387749    1704 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 09:50:40.390604    1704 out.go:97] Using the qemu2 driver based on user configuration
	I0828 09:50:40.390612    1704 start.go:297] selected driver: qemu2
	I0828 09:50:40.390614    1704 start.go:901] validating driver "qemu2" against <nil>
	I0828 09:50:40.390674    1704 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 09:50:40.393612    1704 out.go:169] Automatically selected the socket_vmnet network
	I0828 09:50:40.398768    1704 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0828 09:50:40.398849    1704 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 09:50:40.398869    1704 cni.go:84] Creating CNI manager for ""
	I0828 09:50:40.398876    1704 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 09:50:40.398882    1704 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 09:50:40.398926    1704 start.go:340] cluster config:
	{Name:download-only-436000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 09:50:40.402305    1704 iso.go:125] acquiring lock: {Name:mkdd8e8628868155844348d9fdde81b1e3776b00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 09:50:40.405639    1704 out.go:97] Starting "download-only-436000" primary control-plane node in "download-only-436000" cluster
	I0828 09:50:40.405647    1704 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 09:50:40.470904    1704 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 09:50:40.470918    1704 cache.go:56] Caching tarball of preloaded images
	I0828 09:50:40.471109    1704 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 09:50:40.475364    1704 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0828 09:50:40.475371    1704 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 09:50:40.629059    1704 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 09:50:44.577234    1704 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 09:50:44.577394    1704 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 09:50:45.102463    1704 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 09:50:45.102657    1704 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/download-only-436000/config.json ...
	I0828 09:50:45.102673    1704 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/download-only-436000/config.json: {Name:mk7fb37e893164d8def344c60d74a22a0b6d70a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:50:45.103023    1704 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 09:50:45.103150    1704 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19529-1176/.minikube/cache/darwin/arm64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-436000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-436000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-436000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestBinaryMirror (0.37s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-378000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-378000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-378000
--- PASS: TestBinaryMirror (0.37s)
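
TestBinaryMirror points minikube's binary downloads at a local HTTP endpoint via --binary-mirror instead of dl.k8s.io. A rough by-hand equivalent, assuming python3 is present; the port, profile name, and mirror directory below are illustrative placeholders, not values from this run:

	python3 -m http.server 49313 --directory ./mirror &     # hypothetical local mirror root
	out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2
	out/minikube-darwin-arm64 delete -p binary-mirror-demo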

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-793000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-793000: exit status 85 (55.179542ms)

                                                
                                                
-- stdout --
	* Profile "addons-793000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-793000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-793000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-793000: exit status 85 (59.174167ms)

-- stdout --
	* Profile "addons-793000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-793000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (202.84s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-793000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-793000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m22.836541833s)
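To reproduce this setup by hand, the same flags work against a local build; a minimal sketch using only a subset of the addons above (profile, memory, and driver taken from the log, the addon subset is arbitrary):

# Start with a few of the addons exercised above, then list their states.
out/minikube-darwin-arm64 start -p addons-793000 --memory=4000 --driver=qemu2 \
  --addons=registry --addons=metrics-server --addons=ingress
out/minikube-darwin-arm64 addons list -p addons-793000   # shows enabled/disabled per addon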
--- PASS: TestAddons/Setup (202.84s)

TestAddons/serial/Volcano (38.36s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 7.707458ms
addons_test.go:905: volcano-admission stabilized in 7.735375ms
addons_test.go:913: volcano-controller stabilized in 7.759041ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-m9zsh" [30d62a47-9ca9-42cc-92ab-eb869198c43d] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004266125s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-5cbxk" [f9ba3df9-0b13-4482-ac31-8b84d7de7e8b] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004959334s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-jv7pw" [365181ff-5268-4186-88b7-338ac7f65532] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005188875s
addons_test.go:932: (dbg) Run:  kubectl --context addons-793000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-793000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-793000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [967da319-001f-438c-936d-6cded74c8d7d] Pending
helpers_test.go:344: "test-job-nginx-0" [967da319-001f-438c-936d-6cded74c8d7d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [967da319-001f-438c-936d-6cded74c8d7d] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.00617525s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-793000 addons disable volcano --alsologtostderr -v=1: (10.122540083s)
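The job manifest referenced above ships in the repo's testdata; the sketch below is not that file, only a minimal Volcano Job of the same shape, assuming the batch.volcano.sh/v1alpha1 schema and that the volcano addon's CRDs are installed:

kubectl --context addons-793000 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano   # hand scheduling to Volcano rather than the default scheduler
  minAvailable: 1
  tasks:
    - replicas: 1
      name: nginx
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx
EOF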
--- PASS: TestAddons/serial/Volcano (38.36s)

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-793000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-793000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Ingress (18.48s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-793000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-793000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-793000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d9d28161-5d94-4ed0-be48-2e56f8462173] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d9d28161-5d94-4ed0-be48-2e56f8462173] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.010091167s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-793000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-793000 addons disable ingress --alsologtostderr -v=1: (7.241173167s)
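The two verification steps above can be replayed manually once the ingress and ingress-dns addons are enabled; these are the same commands the test ran (the resolver IP comes from `minikube ip` and differs per run):

IP=$(out/minikube-darwin-arm64 -p addons-793000 ip)
# Curl through the ingress controller with the Host header the rule matches on.
out/minikube-darwin-arm64 -p addons-793000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
# Resolve a test record against the ingress-dns responder on the node IP.
nslookup hello-john.test "$IP"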
--- PASS: TestAddons/parallel/Ingress (18.48s)

TestAddons/parallel/InspektorGadget (10.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-55527" [f95fa10f-918a-4696-86a8-b79fd9005290] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008795375s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-793000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-793000: (5.3018685s)
--- PASS: TestAddons/parallel/InspektorGadget (10.31s)

TestAddons/parallel/MetricsServer (5.28s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.214167ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4bnvk" [7c0760bb-1ad8-4130-b346-cd4764ad0de8] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00978075s
addons_test.go:417: (dbg) Run:  kubectl --context addons-793000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.28s)

TestAddons/parallel/CSI (35.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.53ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-793000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-793000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f74296a4-68bf-481e-a18e-9684ad52845f] Pending
helpers_test.go:344: "task-pv-pod" [f74296a4-68bf-481e-a18e-9684ad52845f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f74296a4-68bf-481e-a18e-9684ad52845f] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.006807291s
addons_test.go:590: (dbg) Run:  kubectl --context addons-793000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-793000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-793000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-793000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-793000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-793000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-793000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9706efd2-960d-47ef-903b-ba1de78c5c0d] Pending
helpers_test.go:344: "task-pv-pod-restore" [9706efd2-960d-47ef-903b-ba1de78c5c0d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9706efd2-960d-47ef-903b-ba1de78c5c0d] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.010645375s
addons_test.go:632: (dbg) Run:  kubectl --context addons-793000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-793000 delete pod task-pv-pod-restore: (1.082964584s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-793000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-793000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-793000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.217790458s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 addons disable volumesnapshots --alsologtostderr -v=1
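Condensed, the snapshot/restore round-trip above is six kubectl steps against the same testdata files; waiting for readyToUse=true on the snapshot before restoring is the only ordering constraint:

kubectl --context addons-793000 create -f testdata/csi-hostpath-driver/pvc.yaml
kubectl --context addons-793000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
kubectl --context addons-793000 create -f testdata/csi-hostpath-driver/snapshot.yaml
# Poll until the snapshot reports readyToUse=true.
kubectl --context addons-793000 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
kubectl --context addons-793000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
kubectl --context addons-793000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml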
--- PASS: TestAddons/parallel/CSI (35.60s)

TestAddons/parallel/Headlamp (15.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-793000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-xh7z6" [4ab985c2-4309-4f8e-a1b8-edf40c3fba86] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-xh7z6" [4ab985c2-4309-4f8e-a1b8-edf40c3fba86] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.005547292s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-793000 addons disable headlamp --alsologtostderr -v=1: (5.293015708s)
--- PASS: TestAddons/parallel/Headlamp (15.65s)

TestAddons/parallel/CloudSpanner (5.19s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-gxtcg" [828f29fe-8373-4663-bcf0-de92ea6090f8] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008288708s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-793000
--- PASS: TestAddons/parallel/CloudSpanner (5.19s)

TestAddons/parallel/LocalPath (42.03s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-793000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-793000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-793000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [841aad0d-ad75-42e9-bf12-e7e0b1374fb5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [841aad0d-ad75-42e9-bf12-e7e0b1374fb5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [841aad0d-ad75-42e9-bf12-e7e0b1374fb5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.007679834s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-793000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 ssh "cat /opt/local-path-provisioner/pvc-5592a66e-8dcf-4b74-a843-f36e444a4d73_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-793000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-793000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-793000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.51196975s)
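The ssh/cat step above works because local-path-provisioner backs each PV with a directory named after the claim under /opt/local-path-provisioner on the node. A sketch of locating it (the pvc-… directory name is generated per claim, so it must be looked up rather than hardcoded):

# Map the claim to its generated volume name, then list the backing directories.
kubectl --context addons-793000 get pvc test-pvc -o jsonpath='{.spec.volumeName}'
out/minikube-darwin-arm64 -p addons-793000 ssh "ls /opt/local-path-provisioner/"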
--- PASS: TestAddons/parallel/LocalPath (42.03s)

TestAddons/parallel/NvidiaDevicePlugin (5.18s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lqcnw" [8df0fa9c-1783-42d0-bf14-de8210a636d7] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.010458125s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-793000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.18s)

TestAddons/parallel/Yakd (11.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-s65s8" [aeb08478-4883-43ea-8254-74e78fe08d80] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.007267792s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-793000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-793000 addons disable yakd --alsologtostderr -v=1: (5.259361958s)
--- PASS: TestAddons/parallel/Yakd (11.27s)

TestAddons/StoppedEnableDisable (9.38s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-793000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-793000: (9.1789975s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-793000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-793000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-793000
--- PASS: TestAddons/StoppedEnableDisable (9.38s)

TestHyperKitDriverInstallOrUpdate (11.14s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.14s)

TestErrorSpam/setup (34.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-605000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-605000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 --driver=qemu2 : (34.661334166s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (34.66s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 pause
--- PASS: TestErrorSpam/pause (0.65s)

TestErrorSpam/unpause (0.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 unpause
--- PASS: TestErrorSpam/unpause (0.59s)

TestErrorSpam/stop (64.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 stop: (12.204128041s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 stop: (26.036182709s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-605000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-605000 stop: (26.030001416s)
--- PASS: TestErrorSpam/stop (64.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19529-1176/.minikube/files/etc/test/nested/copy/1678/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.78s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-429000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-429000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (49.777291584s)
--- PASS: TestFunctional/serial/StartWithProxy (49.78s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.35s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-429000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-429000 --alsologtostderr -v=8: (38.348565834s)
functional_test.go:663: soft start took 38.349100792s for "functional-429000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.35s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-429000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-429000 cache add registry.k8s.io/pause:3.1: (1.903192209s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-429000 cache add registry.k8s.io/pause:3.3: (1.897198875s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-429000 cache add registry.k8s.io/pause:latest: (1.363237958s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.16s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-429000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local232588534/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 cache add minikube-local-cache-test:functional-429000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 cache delete minikube-local-cache-test:functional-429000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-429000
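The local-image flow mirrors the test exactly: build on the host docker, push into the cluster's cache, then remove from both sides. A sketch with an illustrative tag:

docker build -t minikube-local-cache-test:demo .                              # host-side build
out/minikube-darwin-arm64 -p functional-429000 cache add minikube-local-cache-test:demo
out/minikube-darwin-arm64 -p functional-429000 cache delete minikube-local-cache-test:demo
docker rmi minikube-local-cache-test:demo                                     # host-side cleanup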
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-429000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (68.898542ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
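What `cache reload` provides is visible in the sequence: after an image is deleted inside the node, reload pushes everything in the host-side cache back in. The same steps, condensed:

out/minikube-darwin-arm64 -p functional-429000 ssh sudo docker rmi registry.k8s.io/pause:latest
out/minikube-darwin-arm64 -p functional-429000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exits 1: image gone
out/minikube-darwin-arm64 -p functional-429000 cache reload
out/minikube-darwin-arm64 -p functional-429000 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again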
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.17s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.83s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 kubectl -- --context functional-429000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.83s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-429000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-429000 get pods: (1.019740792s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (33.6s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-429000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-429000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.602409541s)
functional_test.go:761: restart took 33.602525542s for "functional-429000" cluster.
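--extra-config takes component.key=value pairs and is persisted in the profile, which is why the restarted cluster keeps the admission-plugin setting. General form (the second flag is an illustrative kubelet example, not one this test sets):

out/minikube-darwin-arm64 start -p functional-429000 --wait=all \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
  --extra-config=kubelet.max-pods=110   # illustrative; any component.key=value works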
--- PASS: TestFunctional/serial/ExtraConfig (33.60s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-429000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (0.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.67s)

TestFunctional/serial/LogsFileCmd (0.59s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1568881286/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.59s)

TestFunctional/serial/InvalidService (3.57s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-429000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-429000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-429000: exit status 115 (142.759959ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31122 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-429000 delete -f testdata/invalidsvc.yaml
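Exit status 115 corresponds to the SVC_UNREACHABLE reason above (the service exists but has no running endpoints), so scripts can branch on it; a sketch:

out/minikube-darwin-arm64 service invalid-svc -p functional-429000
if [ $? -eq 115 ]; then
  echo "service has no running endpoints (SVC_UNREACHABLE)"   # per the exit code in this log
fi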
--- PASS: TestFunctional/serial/InvalidService (3.57s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-429000 config get cpus: exit status 14 (31.147917ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-429000 config get cpus: exit status 14 (31.342125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
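`config get` on an unset key exits 14 instead of printing an empty value, so consumers need an explicit fallback; a sketch:

# Fall back to a default when the key is unset (exit 14, as captured above).
cpus=$(out/minikube-darwin-arm64 -p functional-429000 config get cpus 2>/dev/null) || cpus=2
echo "using cpus=$cpus"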
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DashboardCmd (12.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-429000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-429000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2910: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.03s)

TestFunctional/parallel/DryRun (0.32s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-429000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-429000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (171.979083ms)

-- stdout --
	* [functional-429000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0828 10:09:40.027076    2879 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:09:40.027514    2879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:09:40.027520    2879 out.go:358] Setting ErrFile to fd 2...
	I0828 10:09:40.027523    2879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:09:40.027661    2879 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:09:40.031534    2879 out.go:352] Setting JSON to false
	I0828 10:09:40.049784    2879 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2343,"bootTime":1724862637,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:09:40.049871    2879 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:09:40.061086    2879 out.go:177] * [functional-429000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0828 10:09:40.069143    2879 notify.go:220] Checking for updates...
	I0828 10:09:40.073054    2879 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:09:40.077127    2879 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:09:40.084129    2879 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:09:40.093055    2879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:09:40.102908    2879 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:09:40.109072    2879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:09:40.116411    2879 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:09:40.116671    2879 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:09:40.121068    2879 out.go:177] * Using the qemu2 driver based on existing profile
	I0828 10:09:40.133126    2879 start.go:297] selected driver: qemu2
	I0828 10:09:40.133132    2879 start.go:901] validating driver "qemu2" against &{Name:functional-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:09:40.133231    2879 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:09:40.143940    2879 out.go:201] 
	W0828 10:09:40.152096    2879 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0828 10:09:40.159062    2879 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-429000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
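--dry-run performs the full flag/resource validation without touching the VM, which is why the 250MB request above fails with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while this second, flag-clean invocation passes. That makes it usable as a pre-flight check; a sketch:

# Validate the flags against the existing profile before actually starting.
if out/minikube-darwin-arm64 start -p functional-429000 --dry-run --memory=4000 --driver=qemu2; then
  out/minikube-darwin-arm64 start -p functional-429000 --memory=4000 --driver=qemu2
fi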
--- PASS: TestFunctional/parallel/DryRun (0.32s)

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-429000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-429000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (131.138209ms)
-- stdout --
	* [functional-429000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0828 10:09:40.343382    2890 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:09:40.343506    2890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:09:40.343510    2890 out.go:358] Setting ErrFile to fd 2...
	I0828 10:09:40.343512    2890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:09:40.343651    2890 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
	I0828 10:09:40.346637    2890 out.go:352] Setting JSON to false
	I0828 10:09:40.364929    2890 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2343,"bootTime":1724862637,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0828 10:09:40.365022    2890 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:09:40.368109    2890 out.go:177] * [functional-429000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0828 10:09:40.382469    2890 notify.go:220] Checking for updates...
	I0828 10:09:40.386630    2890 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:09:40.390465    2890 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	I0828 10:09:40.400513    2890 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0828 10:09:40.405268    2890 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:09:40.409228    2890 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	I0828 10:09:40.413074    2890 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:09:40.417381    2890 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:09:40.417755    2890 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:09:40.422057    2890 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0828 10:09:40.429060    2890 start.go:297] selected driver: qemu2
	I0828 10:09:40.429069    2890 start.go:901] validating driver "qemu2" against &{Name:functional-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-429000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:09:40.429170    2890 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:09:40.435214    2890 out.go:201] 
	W0828 10:09:40.438038    2890 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0828 10:09:40.442051    2890 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
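
Note: the French output above is the point of this test. minikube selects its message catalog from the standard locale environment variables, so a hand-run equivalent is presumably (mechanism assumed, not shown in this log):

    LC_ALL=fr out/minikube-darwin-arm64 start -p functional-429000 --dry-run --memory 250MB --driver=qemu2
    # expect: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ..." with the same exit status 23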

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
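
Note: "kublet:" in the custom format above is only a literal label in the Go template; the field reference {{.Kubelet}} is what the check depends on, so the misspelling is cosmetic. The same call with a corrected label:

    out/minikube-darwin-arm64 -p functional-429000 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    # expected shape (assumption): host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured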

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (26.68s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b2a1970f-80ce-4025-ab2f-7caf3b7ea2e8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008789041s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-429000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-429000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-429000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-429000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [50e5084b-0ba5-47ec-acf1-a28c3f093897] Pending
helpers_test.go:344: "sp-pod" [50e5084b-0ba5-47ec-acf1-a28c3f093897] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0828 10:09:10.342279    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:10.350331    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:10.363687    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:10.386074    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:10.429355    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:10.512750    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:10.676116    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:10.999498    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:11.641530    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:12.924820    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [50e5084b-0ba5-47ec-acf1-a28c3f093897] Running
E0828 10:09:15.488581    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.010059875s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-429000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-429000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-429000 delete -f testdata/storage-provisioner/pod.yaml: (1.147052166s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-429000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a5775ad0-2afc-4d32-81aa-f7ee8746db1b] Pending
helpers_test.go:344: "sp-pod" [a5775ad0-2afc-4d32-81aa-f7ee8746db1b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a5775ad0-2afc-4d32-81aa-f7ee8746db1b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009191167s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-429000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.68s)
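
Note: the manifests under testdata/storage-provisioner are not reproduced in the log; a minimal sketch of what the two applies plausibly look like, with the names (myclaim, sp-pod, myfrontend, /tmp/mount, test=storage-provisioner) taken from the log and everything else an assumption:

    kubectl --context functional-429000 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi        # size is an assumption; the log does not show it
    EOF

    kubectl --context functional-429000 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: sp-pod
      labels:
        test: storage-provisioner
    spec:
      containers:
      - name: myfrontend
        image: nginx            # image is an assumption
        volumeMounts:
        - name: mypd
          mountPath: /tmp/mount
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myclaim
    EOF

The delete/re-apply in the middle of the test then checks that /tmp/mount/foo survives the pod, i.e. that the claim's backing volume persists.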

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.52s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh -n functional-429000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 cp functional-429000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3567334899/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh -n functional-429000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh -n functional-429000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.52s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1678/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "sudo cat /etc/test/nested/copy/1678/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
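
Note: FileSync works off minikube's file-sync directory: anything under $MINIKUBE_HOME/files is copied into the VM at the same path when the machine starts. A hand-run sketch mirroring the test's layout (1678 is the test process's pid):

    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/1678"
    echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/1678/hosts"
    # the copy happens on the next start of the machine
    out/minikube-darwin-arm64 -p functional-429000 ssh "sudo cat /etc/test/nested/copy/1678/hosts"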

TestFunctional/parallel/CertSync (0.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1678.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "sudo cat /etc/ssl/certs/1678.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1678.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "sudo cat /usr/share/ca-certificates/1678.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16782.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "sudo cat /etc/ssl/certs/16782.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16782.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "sudo cat /usr/share/ca-certificates/16782.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)
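
Note: the 51391683.0 / 3ec20f2e.0 names checked above are OpenSSL subject-hash filenames. Assuming the harness stages its PEMs under $MINIKUBE_HOME/certs (the directory minikube installs into the VM's trust store), the hash can be derived directly:

    openssl x509 -noout -hash -in "$MINIKUBE_HOME/certs/1678.pem"
    # prints the 8-hex-digit hash used as /etc/ssl/certs/<hash>.0 inside the VM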

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-429000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-429000 ssh "sudo systemctl is-active crio": exit status 1 (82.100875ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)
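
Note: systemctl is-active exits 3 for an inactive unit, which is the "Process exited with status 3" in the inner ssh above; the test only needs the non-zero exit plus the "inactive" stdout to conclude crio is disabled. Hand-run equivalent:

    out/minikube-darwin-arm64 -p functional-429000 ssh "sudo systemctl is-active crio"
    # stdout: inactive    (non-zero exit; systemctl reports inactive units with status 3)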

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.27s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.27s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-429000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-429000
docker.io/kicbase/echo-server:functional-429000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-429000 image ls --format short --alsologtostderr:
I0828 10:09:41.138890    2911 out.go:345] Setting OutFile to fd 1 ...
I0828 10:09:41.139066    2911 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:09:41.139073    2911 out.go:358] Setting ErrFile to fd 2...
I0828 10:09:41.139075    2911 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:09:41.139217    2911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
I0828 10:09:41.139678    2911 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:09:41.139744    2911 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:09:41.140571    2911 ssh_runner.go:195] Run: systemctl --version
I0828 10:09:41.140578    2911 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/functional-429000/id_rsa Username:docker}
I0828 10:09:41.174750    2911 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.10s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-429000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/library/minikube-local-cache-test | functional-429000 | f2fb8fed93fd7 | 30B    |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| localhost/my-image                          | functional-429000 | b3951ea007743 | 1.41MB |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| docker.io/kicbase/echo-server               | functional-429000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-429000 image ls --format table --alsologtostderr:
I0828 10:09:44.320987    2926 out.go:345] Setting OutFile to fd 1 ...
I0828 10:09:44.321310    2926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:09:44.321315    2926 out.go:358] Setting ErrFile to fd 2...
I0828 10:09:44.321317    2926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:09:44.321561    2926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
I0828 10:09:44.322290    2926 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:09:44.322381    2926 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:09:44.323263    2926 ssh_runner.go:195] Run: systemctl --version
I0828 10:09:44.323273    2926 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/functional-429000/id_rsa Username:docker}
I0828 10:09:44.348555    2926 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E0828 10:09:51.338870    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
2024/08/28 10:09:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-429000 image ls --format json --alsologtostderr:
[{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696
cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"b3951ea007743bb14601281de8ef0fd0413ce32d5c649aa222627e093d1150f0","repoDigests":[],"repoTags":["localhost/my-image:functional-429000"],"size":"1410000"},{"id":"f2fb8fed93fd7c1ae42dfc9d433c81786e59ca0eb8a5f4162d238c24d0dbfd45","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-429000"],"size":"30"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesu
i/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-429000"],"size":"4780000"}
]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-429000 image ls --format json --alsologtostderr:
I0828 10:09:44.242328    2924 out.go:345] Setting OutFile to fd 1 ...
I0828 10:09:44.242489    2924 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:09:44.242492    2924 out.go:358] Setting ErrFile to fd 2...
I0828 10:09:44.242494    2924 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:09:44.242615    2924 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
I0828 10:09:44.243022    2924 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:09:44.243090    2924 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:09:44.243882    2924 ssh_runner.go:195] Run: systemctl --version
I0828 10:09:44.243893    2924 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/functional-429000/id_rsa Username:docker}
I0828 10:09:44.271378    2924 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
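
Note: the JSON form is the easiest of the four list formats to post-process; for example, assuming jq is available on the agent:

    out/minikube-darwin-arm64 -p functional-429000 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0])\t\(.size)"' | sort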

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-429000 image ls --format yaml --alsologtostderr:
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-429000
size: "4780000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: f2fb8fed93fd7c1ae42dfc9d433c81786e59ca0eb8a5f4162d238c24d0dbfd45
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-429000
size: "30"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-429000 image ls --format yaml --alsologtostderr:
I0828 10:09:41.238884    2913 out.go:345] Setting OutFile to fd 1 ...
I0828 10:09:41.239029    2913 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:09:41.239032    2913 out.go:358] Setting ErrFile to fd 2...
I0828 10:09:41.239035    2913 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:09:41.239168    2913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
I0828 10:09:41.239575    2913 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:09:41.239637    2913 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:09:41.240596    2913 ssh_runner.go:195] Run: systemctl --version
I0828 10:09:41.240603    2913 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/functional-429000/id_rsa Username:docker}
I0828 10:09:41.266588    2913 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-429000 ssh pgrep buildkitd: exit status 1 (62.982625ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image build -t localhost/my-image:functional-429000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-429000 image build -t localhost/my-image:functional-429000 testdata/build --alsologtostderr: (2.797979666s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-429000 image build -t localhost/my-image:functional-429000 testdata/build --alsologtostderr:
I0828 10:09:41.378172    2917 out.go:345] Setting OutFile to fd 1 ...
I0828 10:09:41.378427    2917 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:09:41.378431    2917 out.go:358] Setting ErrFile to fd 2...
I0828 10:09:41.378433    2917 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:09:41.378593    2917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1176/.minikube/bin
I0828 10:09:41.379095    2917 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:09:41.385373    2917 config.go:182] Loaded profile config "functional-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:09:41.386265    2917 ssh_runner.go:195] Run: systemctl --version
I0828 10:09:41.386277    2917 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1176/.minikube/machines/functional-429000/id_rsa Username:docker}
I0828 10:09:41.414233    2917 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2135866849.tar
I0828 10:09:41.414301    2917 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0828 10:09:41.419397    2917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2135866849.tar
I0828 10:09:41.421538    2917 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2135866849.tar: stat -c "%s %y" /var/lib/minikube/build/build.2135866849.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2135866849.tar': No such file or directory
I0828 10:09:41.421560    2917 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2135866849.tar --> /var/lib/minikube/build/build.2135866849.tar (3072 bytes)
I0828 10:09:41.435205    2917 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2135866849
I0828 10:09:41.438771    2917 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2135866849 -xf /var/lib/minikube/build/build.2135866849.tar
I0828 10:09:41.442170    2917 docker.go:360] Building image: /var/lib/minikube/build/build.2135866849
I0828 10:09:41.442245    2917 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-429000 /var/lib/minikube/build/build.2135866849
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.9s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 1.0s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:b3951ea007743bb14601281de8ef0fd0413ce32d5c649aa222627e093d1150f0
#8 writing image sha256:b3951ea007743bb14601281de8ef0fd0413ce32d5c649aa222627e093d1150f0 done
#8 naming to localhost/my-image:functional-429000 done
#8 DONE 0.0s
I0828 10:09:44.071426    2917 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-429000 /var/lib/minikube/build/build.2135866849: (2.629222959s)
I0828 10:09:44.071486    2917 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2135866849
I0828 10:09:44.075868    2917 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2135866849.tar
I0828 10:09:44.078989    2917 build_images.go:217] Built localhost/my-image:functional-429000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2135866849.tar
I0828 10:09:44.079002    2917 build_images.go:133] succeeded building to: functional-429000
I0828 10:09:44.079006    2917 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)
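
Note: the BuildKit steps above imply the content of testdata/build; a sketch reconstructed from them (step #1 shows a 97B Dockerfile, step #4 a 62B context, so the exact content.txt payload is an assumption):

    mkdir -p testdata/build
    cat > testdata/build/Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    echo "some content" > testdata/build/content.txt
    out/minikube-darwin-arm64 -p functional-429000 image build -t localhost/my-image:functional-429000 testdata/build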

TestFunctional/parallel/ImageCommands/Setup (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.670070084s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-429000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.69s)

TestFunctional/parallel/DockerEnv/bash (0.32s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-429000 docker-env) && out/minikube-darwin-arm64 status -p functional-429000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-429000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.32s)
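
Note: docker-env points the host docker CLI at the daemon inside the VM by printing shell exports for eval. The variable set below is the usual one and is an assumption for this run (the IP matches the node IP in the logs; 2376 is the conventional Docker TLS port):

    out/minikube-darwin-arm64 -p functional-429000 docker-env
    #   export DOCKER_TLS_VERIFY="1"
    #   export DOCKER_HOST="tcp://192.168.105.4:2376"
    #   export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/19529-1176/.minikube/certs"
    #   export MINIKUBE_ACTIVE_DOCKERD="functional-429000"
    eval "$(out/minikube-darwin-arm64 -p functional-429000 docker-env)" && docker images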

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-429000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-429000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-www5b" [50d55603-c843-4cb6-a1de-603012f70725] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-www5b" [50d55603-c843-4cb6-a1de-603012f70725] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.010916208s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
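
Note: the harness's health wait accepts a Running pod even while its containers report unready (visible in the two states above). A hand-run equivalent of this step, matching on phase rather than readiness (sketch):

    kubectl --context functional-429000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-429000 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-429000 wait pod -l app=hello-node --for=jsonpath='{.status.phase}'=Running --timeout=600s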

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image load --daemon kicbase/echo-server:functional-429000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image load --daemon kicbase/echo-server:functional-429000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.57s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-429000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image load --daemon kicbase/echo-server:functional-429000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image save kicbase/echo-server:functional-429000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image rm kicbase/echo-server:functional-429000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-429000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 image save --daemon kicbase/echo-server:functional-429000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-429000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-429000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-429000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-429000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2728: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-429000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-429000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-429000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [39f87edc-dfbb-4246-9ddb-5fb1075a48ee] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [39f87edc-dfbb-4246-9ddb-5fb1075a48ee] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00794025s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

TestFunctional/parallel/ServiceCmd/List (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 service list -o json
functional_test.go:1494: Took "81.807333ms" to run "out/minikube-darwin-arm64 -p functional-429000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30371
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30371
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-429000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.88.214 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-429000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "86.578833ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.789334ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "86.419334ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.54ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (7.32s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-429000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1535466881/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724864969705486000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1535466881/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724864969705486000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1535466881/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724864969705486000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1535466881/001/test-1724864969705486000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.965584ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 28 17:09 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 28 17:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 28 17:09 test-1724864969705486000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh cat /mount-9p/test-1724864969705486000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-429000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4fa8f109-116d-447f-b5e3-d7d25c9f0103] Pending
E0828 10:09:30.855657    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [4fa8f109-116d-447f-b5e3-d7d25c9f0103] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4fa8f109-116d-447f-b5e3-d7d25c9f0103] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4fa8f109-116d-447f-b5e3-d7d25c9f0103] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.0090695s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-429000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-429000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1535466881/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.32s)

TestFunctional/parallel/MountCmd/specific-port (1.24s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-429000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port603445905/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.184916ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-429000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port603445905/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-429000 ssh "sudo umount -f /mount-9p": exit status 1 (63.991333ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-429000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-429000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port603445905/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.24s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-429000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3999750471/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-429000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3999750471/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-429000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3999750471/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T" /mount1: exit status 1 (72.741458ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T" /mount3: exit status 1 (57.9125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-429000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-429000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-429000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3999750471/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-429000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3999750471/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-429000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3999750471/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-429000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-429000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-429000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (206.53s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-092000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0828 10:10:32.301677    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:11:54.223602    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-092000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m26.352860959s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (206.53s)

TestMultiControlPlane/serial/DeployApp (6.18s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-092000 -- rollout status deployment/busybox: (4.7366235s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-5559j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-nvsp5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-zdgj7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-5559j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-nvsp5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-zdgj7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-5559j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-nvsp5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-zdgj7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.18s)

TestMultiControlPlane/serial/PingHostFromPods (0.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-5559j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-5559j -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-nvsp5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-nvsp5 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-zdgj7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-092000 -- exec busybox-7dff88458-zdgj7 -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.72s)

TestMultiControlPlane/serial/AddWorkerNode (51.78s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-092000 -v=7 --alsologtostderr
E0828 10:13:50.900848    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:50.908445    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:50.921810    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:50.945157    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:50.988312    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:51.071423    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:51.234837    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:51.556481    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:52.199947    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:53.483262    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:56.046663    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:14:01.170060    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:14:10.336378    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:14:11.412243    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-092000 -v=7 --alsologtostderr: (51.554053667s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.78s)

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-092000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.21s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp testdata/cp-test.txt ha-092000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3289475244/001/cp-test_ha-092000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000:/home/docker/cp-test.txt ha-092000-m02:/home/docker/cp-test_ha-092000_ha-092000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m02 "sudo cat /home/docker/cp-test_ha-092000_ha-092000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000:/home/docker/cp-test.txt ha-092000-m03:/home/docker/cp-test_ha-092000_ha-092000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m03 "sudo cat /home/docker/cp-test_ha-092000_ha-092000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000:/home/docker/cp-test.txt ha-092000-m04:/home/docker/cp-test_ha-092000_ha-092000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m04 "sudo cat /home/docker/cp-test_ha-092000_ha-092000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp testdata/cp-test.txt ha-092000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3289475244/001/cp-test_ha-092000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000-m02:/home/docker/cp-test.txt ha-092000:/home/docker/cp-test_ha-092000-m02_ha-092000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000 "sudo cat /home/docker/cp-test_ha-092000-m02_ha-092000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000-m02:/home/docker/cp-test.txt ha-092000-m03:/home/docker/cp-test_ha-092000-m02_ha-092000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m03 "sudo cat /home/docker/cp-test_ha-092000-m02_ha-092000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000-m02:/home/docker/cp-test.txt ha-092000-m04:/home/docker/cp-test_ha-092000-m02_ha-092000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m04 "sudo cat /home/docker/cp-test_ha-092000-m02_ha-092000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp testdata/cp-test.txt ha-092000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3289475244/001/cp-test_ha-092000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000-m03:/home/docker/cp-test.txt ha-092000:/home/docker/cp-test_ha-092000-m03_ha-092000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000 "sudo cat /home/docker/cp-test_ha-092000-m03_ha-092000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000-m03:/home/docker/cp-test.txt ha-092000-m02:/home/docker/cp-test_ha-092000-m03_ha-092000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m02 "sudo cat /home/docker/cp-test_ha-092000-m03_ha-092000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000-m03:/home/docker/cp-test.txt ha-092000-m04:/home/docker/cp-test_ha-092000-m03_ha-092000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m04 "sudo cat /home/docker/cp-test_ha-092000-m03_ha-092000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp testdata/cp-test.txt ha-092000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3289475244/001/cp-test_ha-092000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000-m04:/home/docker/cp-test.txt ha-092000:/home/docker/cp-test_ha-092000-m04_ha-092000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000 "sudo cat /home/docker/cp-test_ha-092000-m04_ha-092000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000-m04:/home/docker/cp-test.txt ha-092000-m02:/home/docker/cp-test_ha-092000-m04_ha-092000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m02 "sudo cat /home/docker/cp-test_ha-092000-m04_ha-092000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 cp ha-092000-m04:/home/docker/cp-test.txt ha-092000-m03:/home/docker/cp-test_ha-092000-m04_ha-092000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-092000 ssh -n ha-092000-m03 "sudo cat /home/docker/cp-test_ha-092000-m04_ha-092000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.21s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.27s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0828 10:23:50.886234    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/functional-429000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:24:10.322492    1678 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1176/.minikube/profiles/addons-793000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m19.271925583s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.27s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.91s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-940000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-940000 --output=json --user=testUser: (1.912843417s)
--- PASS: TestJSONOutput/stop/Command (1.91s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-427000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-427000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.1595ms)

-- stdout --
	{"specversion":"1.0","id":"3ea4d3e5-98ba-4077-8977-99ea1de75e4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-427000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7dbbd13f-686d-4522-8f57-fc462ab14e93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19529"}}
	{"specversion":"1.0","id":"87113a08-106a-4edc-b3a7-fce1b8ba5518","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig"}}
	{"specversion":"1.0","id":"e749e4ac-a13b-4325-8a6f-dcd6cad83624","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"8646df94-0b1b-4b80-b47f-d5d6ed6763ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4efecb36-38cc-4bf8-a1fa-302d3b756194","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube"}}
	{"specversion":"1.0","id":"b1087337-192b-4ac6-a322-2d65f213b82c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4cba1404-94ef-46bc-8715-1de670e84abd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-427000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-427000
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.39s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.39s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-188000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-188000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.6125ms)

-- stdout --
	* [NoKubernetes-188000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1176/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1176/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr **
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:

	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-188000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-188000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.653333ms)

-- stdout --
	* The control-plane node NoKubernetes-188000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-188000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.45s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.676523625s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.771614542s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.45s)

TestNoKubernetes/serial/Stop (2.09s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-188000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-188000: (2.089413458s)
--- PASS: TestNoKubernetes/serial/Stop (2.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-188000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-188000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.18875ms)

-- stdout --
	* The control-plane node NoKubernetes-188000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-188000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.62s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-801000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.62s)

TestStartStop/group/old-k8s-version/serial/Stop (2.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-198000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-198000 --alsologtostderr -v=3: (2.131674834s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.13s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-198000 -n old-k8s-version-198000: exit status 7 (56.252167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-198000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-178000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-178000 --alsologtostderr -v=3: (3.079228125s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-178000 -n no-preload-178000: exit status 7 (56.51775ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-178000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.52s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-555000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-555000 --alsologtostderr -v=3: (3.523642708s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.52s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-713000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-713000 --alsologtostderr -v=3: (3.867406459s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.87s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-555000 -n embed-certs-555000: exit status 7 (56.230833ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-555000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-713000 -n default-k8s-diff-port-713000: exit status 7 (54.314541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-713000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-413000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-413000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-413000 --alsologtostderr -v=3: (3.5285855s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.53s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-413000 -n newest-cni-413000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-413000 -n newest-cni-413000: exit status 7 (60.286375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-413000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626:
----------------------- debugLogs start: cilium-160000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-160000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-160000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-160000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-160000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-160000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-160000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-160000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-160000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-160000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-160000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: /etc/hosts:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: /etc/resolv.conf:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-160000

>>> host: crictl pods:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: crictl containers:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> k8s: describe netcat deployment:
error: context "cilium-160000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-160000" does not exist

>>> k8s: netcat logs:
error: context "cilium-160000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-160000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-160000" does not exist

>>> k8s: coredns logs:
error: context "cilium-160000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-160000" does not exist

>>> k8s: api server logs:
error: context "cilium-160000" does not exist

>>> host: /etc/cni:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: ip a s:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: ip r s:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: iptables-save:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: iptables table nat:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-160000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-160000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-160000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-160000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-160000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-160000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-160000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-160000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-160000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-160000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-160000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: kubelet daemon config:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> k8s: kubelet logs:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-160000

>>> host: docker daemon status:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: docker daemon config:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: docker system info:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: cri-docker daemon status:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: cri-docker daemon config:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: cri-dockerd version:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: containerd daemon status:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: containerd daemon config:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: containerd config dump:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: crio daemon status:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: crio daemon config:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: /etc/crio:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

>>> host: crio config:
* Profile "cilium-160000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-160000"

----------------------- debugLogs end: cilium-160000 [took: 2.172636334s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-160000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-160000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-407000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-407000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)