Test Report: QEMU_macOS 19282

32a626fe994c067a2713ce1ccf4f75414e4ff172:2024-07-17:35384

Failed tests (120/212)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.74
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.9
36 TestAddons/Setup 10
37 TestCertOptions 9.96
38 TestCertExpiration 195.2
39 TestDockerFlags 10.14
40 TestForceSystemdFlag 9.94
41 TestForceSystemdEnv 10.52
47 TestErrorSpam/setup 9.79
56 TestFunctional/serial/StartWithProxy 9.94
58 TestFunctional/serial/SoftStart 5.25
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
70 TestFunctional/serial/MinikubeKubectlCmd 0.73
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.97
72 TestFunctional/serial/ExtraConfig 5.25
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.06
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.12
86 TestFunctional/parallel/ServiceCmdConnect 0.13
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.11
91 TestFunctional/parallel/CpCmd 0.27
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.28
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.05
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 91.18
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.3
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.27
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.13
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 37.63
150 TestMultiControlPlane/serial/StartCluster 10.05
151 TestMultiControlPlane/serial/DeployApp 109.01
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
159 TestMultiControlPlane/serial/RestartSecondaryNode 55.96
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.33
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
164 TestMultiControlPlane/serial/StopCluster 2.08
165 TestMultiControlPlane/serial/RestartCluster 5.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
171 TestImageBuild/serial/Setup 9.82
174 TestJSONOutput/start/Command 9.79
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.04
203 TestMinikubeProfile 10.07
206 TestMountStart/serial/StartWithMountFirst 9.92
209 TestMultiNode/serial/FreshStart2Nodes 9.91
210 TestMultiNode/serial/DeployApp2Nodes 80.4
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.08
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 53.41
218 TestMultiNode/serial/RestartKeepsNodes 7.11
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 1.89
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 20.13
226 TestPreload 9.89
228 TestScheduledStopUnix 9.89
229 TestSkaffold 12.22
232 TestRunningBinaryUpgrade 708.68
234 TestKubernetesUpgrade 18.93
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.31
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.17
250 TestStoppedBinaryUpgrade/Upgrade 577.79
252 TestPause/serial/Start 9.87
263 TestNoKubernetes/serial/StartWithK8s 9.98
264 TestNoKubernetes/serial/StartWithStopK8s 5.26
265 TestNoKubernetes/serial/Start 5.27
267 TestNoKubernetes/serial/ProfileList 279.39
268 TestNetworkPlugins/group/auto/Start 9.93
269 TestNetworkPlugins/group/calico/Start 9.98
270 TestNetworkPlugins/group/custom-flannel/Start 9.95
271 TestNetworkPlugins/group/false/Start 9.89
272 TestNetworkPlugins/group/kindnet/Start 9.9
273 TestNetworkPlugins/group/flannel/Start 9.89
274 TestNetworkPlugins/group/enable-default-cni/Start 9.93
275 TestNetworkPlugins/group/bridge/Start 10.01
276 TestNetworkPlugins/group/kubenet/Start 7201.093
TestDownloadOnly/v1.20.0/json-events (10.74s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-716000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-716000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (10.739367167s)

-- stdout --
	{"specversion":"1.0","id":"21a7c13a-0504-4055-8b9a-fe399c1fa0a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-716000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c5661e99-349d-46f4-9af4-aa0e3eba752b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19282"}}
	{"specversion":"1.0","id":"9b34bd8a-64a6-4c5a-ad00-8f3c1541d360","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig"}}
	{"specversion":"1.0","id":"b2d31a54-0766-4e1b-9c36-93b239312705","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3a988a8e-0fec-41da-9099-95ab23365064","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d1095add-e3f7-4e01-9851-2c322635e4f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube"}}
	{"specversion":"1.0","id":"a5018466-f2f3-4112-b7df-06a03fe3b540","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"77607ec5-9792-4fa4-8480-df126f23a732","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d2d9de1d-585a-4f34-8cd6-0de170eb2f7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c6ec974f-53b2-4e06-b43e-27136949c118","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"31530c11-7771-4959-9de7-53a6b2dffc54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-716000\" primary control-plane node in \"download-only-716000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6906c568-cc73-4e74-9aed-a49d94bdc03f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"65ca2114-fad0-498c-b029-840cb003ddf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19282-6331/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108b71a60 0x108b71a60 0x108b71a60 0x108b71a60 0x108b71a60 0x108b71a60 0x108b71a60] Decompressors:map[bz2:0x1400098f4d0 gz:0x1400098f4d8 tar:0x1400098f470 tar.bz2:0x1400098f480 tar.gz:0x1400098f490 tar.xz:0x1400098f4a0 tar.zst:0x1400098f4c0 tbz2:0x1400098f480 tgz:0x14
00098f490 txz:0x1400098f4a0 tzst:0x1400098f4c0 xz:0x1400098f4e0 zip:0x1400098f4f0 zst:0x1400098f4e8] Getters:map[file:0x140016e6630 http:0x140000b4d70 https:0x140000b4dc0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"68b3acd7-7b8a-45a1-bfe1-2fb873eecf3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0717 10:53:37.448205    6822 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:53:37.448367    6822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:37.448370    6822 out.go:304] Setting ErrFile to fd 2...
	I0717 10:53:37.448372    6822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:37.448532    6822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	W0717 10:53:37.448658    6822 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19282-6331/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19282-6331/.minikube/config/config.json: no such file or directory
	I0717 10:53:37.449953    6822 out.go:298] Setting JSON to true
	I0717 10:53:37.465908    6822 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4989,"bootTime":1721233828,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 10:53:37.465981    6822 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:53:37.470876    6822 out.go:97] [download-only-716000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:53:37.470987    6822 notify.go:220] Checking for updates...
	W0717 10:53:37.471041    6822 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 10:53:37.473884    6822 out.go:169] MINIKUBE_LOCATION=19282
	I0717 10:53:37.482860    6822 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 10:53:37.490821    6822 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:53:37.493893    6822 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:53:37.496843    6822 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	W0717 10:53:37.502852    6822 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 10:53:37.503043    6822 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:53:37.504518    6822 out.go:97] Using the qemu2 driver based on user configuration
	I0717 10:53:37.504537    6822 start.go:297] selected driver: qemu2
	I0717 10:53:37.504552    6822 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:53:37.504637    6822 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:53:37.507820    6822 out.go:169] Automatically selected the socket_vmnet network
	I0717 10:53:37.513023    6822 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0717 10:53:37.513163    6822 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 10:53:37.513200    6822 cni.go:84] Creating CNI manager for ""
	I0717 10:53:37.513219    6822 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0717 10:53:37.513282    6822 start.go:340] cluster config:
	{Name:download-only-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:53:37.516891    6822 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:53:37.520829    6822 out.go:97] Downloading VM boot image ...
	I0717 10:53:37.520845    6822 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso
	I0717 10:53:41.707750    6822 out.go:97] Starting "download-only-716000" primary control-plane node in "download-only-716000" cluster
	I0717 10:53:41.707788    6822 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:53:41.763035    6822 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0717 10:53:41.763057    6822 cache.go:56] Caching tarball of preloaded images
	I0717 10:53:41.763204    6822 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:53:41.768310    6822 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 10:53:41.768317    6822 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:53:41.850504    6822 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0717 10:53:47.071999    6822 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:53:47.072149    6822 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:53:47.768666    6822 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0717 10:53:47.768854    6822 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/download-only-716000/config.json ...
	I0717 10:53:47.768886    6822 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/download-only-716000/config.json: {Name:mkcd9c2c4d5071025b18638894cd4ee6de6c5251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:53:47.769146    6822 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:53:47.769333    6822 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0717 10:53:48.112704    6822 out.go:169] 
	W0717 10:53:48.116758    6822 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19282-6331/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108b71a60 0x108b71a60 0x108b71a60 0x108b71a60 0x108b71a60 0x108b71a60 0x108b71a60] Decompressors:map[bz2:0x1400098f4d0 gz:0x1400098f4d8 tar:0x1400098f470 tar.bz2:0x1400098f480 tar.gz:0x1400098f490 tar.xz:0x1400098f4a0 tar.zst:0x1400098f4c0 tbz2:0x1400098f480 tgz:0x1400098f490 txz:0x1400098f4a0 tzst:0x1400098f4c0 xz:0x1400098f4e0 zip:0x1400098f4f0 zst:0x1400098f4e8] Getters:map[file:0x140016e6630 http:0x140000b4d70 https:0x140000b4dc0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0717 10:53:48.116786    6822 out_reason.go:110] 
	W0717 10:53:48.123657    6822 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:53:48.127671    6822 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-716000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (10.74s)
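
The failure here is a download problem, not a QEMU problem: the checksum fetch for the v1.20.0 darwin/arm64 kubectl binary returns HTTP 404, most likely because that release predates published darwin/arm64 client binaries. This also explains the TestDownloadOnly/v1.20.0/kubectl failure below, which stats the same file that was never cached. A quick way to confirm the 404 from any host (a diagnostic sketch using plain curl; the v1.20.0 URL is taken verbatim from the log, and the v1.30.2 URL is an assumed comparison against a release this run uses elsewhere):

    # Probe the checksum file the getter fetches first; a 404 here matches
    # "Error downloading checksum file: bad response code: 404" above.
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1

    # Comparison: the v1.30.2 release line used by the other tests in this run.
    curl -sI https://dl.k8s.io/release/v1.30.2/bin/darwin/arm64/kubectl.sha256 | head -n 1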

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19282-6331/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.9s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-806000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-806000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.74959225s)

-- stdout --
	* [offline-docker-806000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-806000" primary control-plane node in "offline-docker-806000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-806000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:05:00.290009    8317 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:05:00.290145    8317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:05:00.290149    8317 out.go:304] Setting ErrFile to fd 2...
	I0717 11:05:00.290151    8317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:05:00.290281    8317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:05:00.291364    8317 out.go:298] Setting JSON to false
	I0717 11:05:00.309085    8317 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5672,"bootTime":1721233828,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:05:00.309187    8317 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:05:00.313732    8317 out.go:177] * [offline-docker-806000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:05:00.320831    8317 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:05:00.320854    8317 notify.go:220] Checking for updates...
	I0717 11:05:00.325706    8317 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:05:00.328695    8317 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:05:00.331706    8317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:05:00.334680    8317 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:05:00.337743    8317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:05:00.341089    8317 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:05:00.341158    8317 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:05:00.345644    8317 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:05:00.352761    8317 start.go:297] selected driver: qemu2
	I0717 11:05:00.352773    8317 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:05:00.352781    8317 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:05:00.354850    8317 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:05:00.357605    8317 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:05:00.360894    8317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:05:00.360909    8317 cni.go:84] Creating CNI manager for ""
	I0717 11:05:00.360916    8317 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:05:00.360920    8317 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:05:00.360950    8317 start.go:340] cluster config:
	{Name:offline-docker-806000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-806000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:05:00.364559    8317 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:05:00.369714    8317 out.go:177] * Starting "offline-docker-806000" primary control-plane node in "offline-docker-806000" cluster
	I0717 11:05:00.373697    8317 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:05:00.373723    8317 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:05:00.373734    8317 cache.go:56] Caching tarball of preloaded images
	I0717 11:05:00.373804    8317 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:05:00.373810    8317 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:05:00.373890    8317 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/offline-docker-806000/config.json ...
	I0717 11:05:00.373901    8317 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/offline-docker-806000/config.json: {Name:mk1a6e5a37258c3101a29227bbb55d75f4cc12cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:05:00.374190    8317 start.go:360] acquireMachinesLock for offline-docker-806000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:05:00.374225    8317 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "offline-docker-806000"
	I0717 11:05:00.374238    8317 start.go:93] Provisioning new machine with config: &{Name:offline-docker-806000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-806000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:05:00.374265    8317 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:05:00.378653    8317 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:05:00.394719    8317 start.go:159] libmachine.API.Create for "offline-docker-806000" (driver="qemu2")
	I0717 11:05:00.394754    8317 client.go:168] LocalClient.Create starting
	I0717 11:05:00.394828    8317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:05:00.394858    8317 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:00.394868    8317 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:00.394914    8317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:05:00.394937    8317 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:00.394949    8317 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:00.395339    8317 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:05:00.535584    8317 main.go:141] libmachine: Creating SSH key...
	I0717 11:05:00.612602    8317 main.go:141] libmachine: Creating Disk image...
	I0717 11:05:00.612612    8317 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:05:00.612992    8317 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/disk.qcow2
	I0717 11:05:00.632962    8317 main.go:141] libmachine: STDOUT: 
	I0717 11:05:00.632982    8317 main.go:141] libmachine: STDERR: 
	I0717 11:05:00.633035    8317 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/disk.qcow2 +20000M
	I0717 11:05:00.641671    8317 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:05:00.641693    8317 main.go:141] libmachine: STDERR: 
	I0717 11:05:00.641720    8317 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/disk.qcow2
	I0717 11:05:00.641726    8317 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:05:00.641737    8317 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:05:00.641769    8317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:6f:0c:ed:bf:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/disk.qcow2
	I0717 11:05:00.643583    8317 main.go:141] libmachine: STDOUT: 
	I0717 11:05:00.643599    8317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:05:00.643624    8317 client.go:171] duration metric: took 248.864875ms to LocalClient.Create
	I0717 11:05:02.645705    8317 start.go:128] duration metric: took 2.271428125s to createHost
	I0717 11:05:02.645723    8317 start.go:83] releasing machines lock for "offline-docker-806000", held for 2.271490125s
	W0717 11:05:02.645739    8317 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:05:02.652789    8317 out.go:177] * Deleting "offline-docker-806000" in qemu2 ...
	W0717 11:05:02.663940    8317 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:05:02.663953    8317 start.go:729] Will try again in 5 seconds ...
	I0717 11:05:07.665798    8317 start.go:360] acquireMachinesLock for offline-docker-806000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:05:07.666274    8317 start.go:364] duration metric: took 374.667µs to acquireMachinesLock for "offline-docker-806000"
	I0717 11:05:07.666399    8317 start.go:93] Provisioning new machine with config: &{Name:offline-docker-806000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-806000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:05:07.666663    8317 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:05:07.676059    8317 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:05:07.723004    8317 start.go:159] libmachine.API.Create for "offline-docker-806000" (driver="qemu2")
	I0717 11:05:07.723067    8317 client.go:168] LocalClient.Create starting
	I0717 11:05:07.723175    8317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:05:07.723240    8317 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:07.723259    8317 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:07.723321    8317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:05:07.723364    8317 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:07.723379    8317 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:07.723898    8317 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:05:07.871676    8317 main.go:141] libmachine: Creating SSH key...
	I0717 11:05:07.947799    8317 main.go:141] libmachine: Creating Disk image...
	I0717 11:05:07.947804    8317 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:05:07.948011    8317 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/disk.qcow2
	I0717 11:05:07.956980    8317 main.go:141] libmachine: STDOUT: 
	I0717 11:05:07.956998    8317 main.go:141] libmachine: STDERR: 
	I0717 11:05:07.957040    8317 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/disk.qcow2 +20000M
	I0717 11:05:07.964759    8317 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:05:07.964775    8317 main.go:141] libmachine: STDERR: 
	I0717 11:05:07.964788    8317 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/disk.qcow2
	I0717 11:05:07.964792    8317 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:05:07.964804    8317 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:05:07.964841    8317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:4e:20:2a:d6:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/offline-docker-806000/disk.qcow2
	I0717 11:05:07.966430    8317 main.go:141] libmachine: STDOUT: 
	I0717 11:05:07.966447    8317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:05:07.966459    8317 client.go:171] duration metric: took 243.386791ms to LocalClient.Create
	I0717 11:05:09.968641    8317 start.go:128] duration metric: took 2.301947459s to createHost
	I0717 11:05:09.968685    8317 start.go:83] releasing machines lock for "offline-docker-806000", held for 2.30238575s
	W0717 11:05:09.969049    8317 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-806000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-806000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:05:09.981067    8317 out.go:177] 
	W0717 11:05:09.985054    8317 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:05:09.985085    8317 out.go:239] * 
	* 
	W0717 11:05:09.988070    8317 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:05:09.995989    8317 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-806000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-17 11:05:10.01379 -0700 PDT m=+692.658992709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-806000 -n offline-docker-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-806000 -n offline-docker-806000: exit status 7 (64.605833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-806000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-806000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-806000
--- FAIL: TestOffline (9.90s)
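
Both VM creation attempts die on the same symptom: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), which points at the socket_vmnet daemon on the agent rather than at minikube itself; the same error recurs in nearly every qemu2 start in this run. A minimal host-side check (a sketch assuming the client and socket paths shown in the log; `true` is just a placeholder command to exercise the connect-then-exec path):

    # Is anything serving the multiplexer socket?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # Exercise the same client/socket pair the tests use; "Connection refused"
    # here reproduces the failure without booting a VM.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true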

TestAddons/Setup (10s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-562000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-562000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (9.994908625s)

-- stdout --
	* [addons-562000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-562000" primary control-plane node in "addons-562000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-562000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 10:54:02.247991    6929 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:54:02.248133    6929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:02.248136    6929 out.go:304] Setting ErrFile to fd 2...
	I0717 10:54:02.248138    6929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:02.248266    6929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:54:02.249460    6929 out.go:298] Setting JSON to false
	I0717 10:54:02.265910    6929 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5014,"bootTime":1721233828,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 10:54:02.265969    6929 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:54:02.270333    6929 out.go:177] * [addons-562000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:54:02.277300    6929 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 10:54:02.277364    6929 notify.go:220] Checking for updates...
	I0717 10:54:02.284322    6929 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 10:54:02.287289    6929 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:54:02.290280    6929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:54:02.293279    6929 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 10:54:02.296269    6929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:54:02.299465    6929 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:54:02.302281    6929 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 10:54:02.309285    6929 start.go:297] selected driver: qemu2
	I0717 10:54:02.309292    6929 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:54:02.309301    6929 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:54:02.311630    6929 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:54:02.312801    6929 out.go:177] * Automatically selected the socket_vmnet network
	I0717 10:54:02.315321    6929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:54:02.315334    6929 cni.go:84] Creating CNI manager for ""
	I0717 10:54:02.315340    6929 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 10:54:02.315343    6929 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 10:54:02.315379    6929 start.go:340] cluster config:
	{Name:addons-562000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-562000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:54:02.319268    6929 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:54:02.326274    6929 out.go:177] * Starting "addons-562000" primary control-plane node in "addons-562000" cluster
	I0717 10:54:02.330291    6929 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:54:02.330304    6929 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:54:02.330314    6929 cache.go:56] Caching tarball of preloaded images
	I0717 10:54:02.330366    6929 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:54:02.330376    6929 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:54:02.330571    6929 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/addons-562000/config.json ...
	I0717 10:54:02.330583    6929 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/addons-562000/config.json: {Name:mkf5ed1e64dfc2c076b84c1bdf88330cb6985bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:54:02.331032    6929 start.go:360] acquireMachinesLock for addons-562000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:54:02.331096    6929 start.go:364] duration metric: took 58µs to acquireMachinesLock for "addons-562000"
	I0717 10:54:02.331106    6929 start.go:93] Provisioning new machine with config: &{Name:addons-562000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-562000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:54:02.331143    6929 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 10:54:02.339250    6929 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 10:54:02.357954    6929 start.go:159] libmachine.API.Create for "addons-562000" (driver="qemu2")
	I0717 10:54:02.357989    6929 client.go:168] LocalClient.Create starting
	I0717 10:54:02.358095    6929 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 10:54:02.434202    6929 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 10:54:02.461990    6929 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 10:54:02.654640    6929 main.go:141] libmachine: Creating SSH key...
	I0717 10:54:02.806980    6929 main.go:141] libmachine: Creating Disk image...
	I0717 10:54:02.806989    6929 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 10:54:02.807188    6929 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/disk.qcow2
	I0717 10:54:02.816272    6929 main.go:141] libmachine: STDOUT: 
	I0717 10:54:02.816293    6929 main.go:141] libmachine: STDERR: 
	I0717 10:54:02.816347    6929 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/disk.qcow2 +20000M
	I0717 10:54:02.824212    6929 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 10:54:02.824234    6929 main.go:141] libmachine: STDERR: 
	I0717 10:54:02.824247    6929 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/disk.qcow2
	I0717 10:54:02.824251    6929 main.go:141] libmachine: Starting QEMU VM...
	I0717 10:54:02.824280    6929 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:54:02.824304    6929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:4b:bf:3a:ac:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/disk.qcow2
	I0717 10:54:02.825896    6929 main.go:141] libmachine: STDOUT: 
	I0717 10:54:02.825914    6929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:54:02.825932    6929 client.go:171] duration metric: took 468.094875ms to LocalClient.Create
	I0717 10:54:04.827492    6929 start.go:128] duration metric: took 2.497128792s to createHost
	I0717 10:54:04.827561    6929 start.go:83] releasing machines lock for "addons-562000", held for 2.49725475s
	W0717 10:54:04.827645    6929 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:54:04.835969    6929 out.go:177] * Deleting "addons-562000" in qemu2 ...
	W0717 10:54:04.860834    6929 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:54:04.860861    6929 start.go:729] Will try again in 5 seconds ...
	I0717 10:54:09.861881    6929 start.go:360] acquireMachinesLock for addons-562000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:54:09.862321    6929 start.go:364] duration metric: took 334.875µs to acquireMachinesLock for "addons-562000"
	I0717 10:54:09.862452    6929 start.go:93] Provisioning new machine with config: &{Name:addons-562000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-562000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:54:09.862731    6929 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 10:54:09.874380    6929 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 10:54:09.924755    6929 start.go:159] libmachine.API.Create for "addons-562000" (driver="qemu2")
	I0717 10:54:09.924808    6929 client.go:168] LocalClient.Create starting
	I0717 10:54:09.924929    6929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 10:54:09.925026    6929 main.go:141] libmachine: Decoding PEM data...
	I0717 10:54:09.925048    6929 main.go:141] libmachine: Parsing certificate...
	I0717 10:54:09.925155    6929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 10:54:09.925201    6929 main.go:141] libmachine: Decoding PEM data...
	I0717 10:54:09.925214    6929 main.go:141] libmachine: Parsing certificate...
	I0717 10:54:09.925736    6929 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 10:54:10.072639    6929 main.go:141] libmachine: Creating SSH key...
	I0717 10:54:10.149642    6929 main.go:141] libmachine: Creating Disk image...
	I0717 10:54:10.149648    6929 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 10:54:10.149849    6929 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/disk.qcow2
	I0717 10:54:10.159077    6929 main.go:141] libmachine: STDOUT: 
	I0717 10:54:10.159092    6929 main.go:141] libmachine: STDERR: 
	I0717 10:54:10.159154    6929 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/disk.qcow2 +20000M
	I0717 10:54:10.166947    6929 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 10:54:10.166968    6929 main.go:141] libmachine: STDERR: 
	I0717 10:54:10.166978    6929 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/disk.qcow2
	I0717 10:54:10.166984    6929 main.go:141] libmachine: Starting QEMU VM...
	I0717 10:54:10.166994    6929 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:54:10.167033    6929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:0b:1d:5d:ef:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/addons-562000/disk.qcow2
	I0717 10:54:10.168706    6929 main.go:141] libmachine: STDOUT: 
	I0717 10:54:10.168721    6929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:54:10.168734    6929 client.go:171] duration metric: took 243.972125ms to LocalClient.Create
	I0717 10:54:12.170551    6929 start.go:128] duration metric: took 2.308233292s to createHost
	I0717 10:54:12.170630    6929 start.go:83] releasing machines lock for "addons-562000", held for 2.308744459s
	W0717 10:54:12.171047    6929 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-562000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-562000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:54:12.185654    6929 out.go:177] 
	W0717 10:54:12.189748    6929 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:54:12.189787    6929 out.go:239] * 
	* 
	W0717 10:54:12.192265    6929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:54:12.198163    6929 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-562000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.00s)
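
Note: every start attempt in this run fails at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet ("Connection refused"), so QEMU never receives a vmnet file descriptor and no VM is ever created. A minimal triage sketch for the CI host follows; it assumes socket_vmnet was installed via Homebrew, so the service commands below are assumptions rather than facts from this log:

	# Does the unix socket the driver dials exist, and is a daemon behind it?
	ls -l /var/run/socket_vmnet
	# Assumed Homebrew install: socket_vmnet runs as a root launchd service
	sudo brew services info socket_vmnet
	sudo brew services restart socket_vmnet

Once the daemon answers again, re-running a single failing profile (e.g. out/minikube-darwin-arm64 start -p addons-562000 --driver=qemu2) should get past the "Creating qemu2 VM" step.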

TestCertOptions (9.96s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-448000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-448000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.696819209s)

-- stdout --
	* [cert-options-448000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-448000" primary control-plane node in "cert-options-448000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-448000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-448000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-448000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-448000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-448000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.892417ms)

-- stdout --
	* The control-plane node cert-options-448000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-448000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-448000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-448000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-448000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-448000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.421708ms)

-- stdout --
	* The control-plane node cert-options-448000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-448000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-448000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-448000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-448000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-17 11:05:40.668889 -0700 PDT m=+723.314045251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-448000 -n cert-options-448000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-448000 -n cert-options-448000: exit status 7 (30.110542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-448000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-448000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-448000
--- FAIL: TestCertOptions (9.96s)
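
Note: the SAN and kubeconfig assertions above are cascade failures: the VM never booted, so "minikube ssh" exits 83 with state=Stopped and there is no apiserver certificate to inspect. For reference, the SAN check the test performs can be reproduced by hand on a running cluster; this sketch reuses the test's own ssh command, with a grep filter added for readability:

	out/minikube-darwin-arm64 -p cert-options-448000 ssh \
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
		| grep -A1 "Subject Alternative Name"

On a healthy cluster the output should list 127.0.0.1, 192.168.15.15, localhost and www.google.com, matching the --apiserver-ips/--apiserver-names flags passed at start.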

TestCertExpiration (195.2s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-696000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-696000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.870686583s)

-- stdout --
	* [cert-expiration-696000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-696000" primary control-plane node in "cert-expiration-696000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-696000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-696000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-696000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-696000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-696000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.213681667s)

-- stdout --
	* [cert-expiration-696000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-696000" primary control-plane node in "cert-expiration-696000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-696000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-696000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-696000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-696000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-696000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-696000" primary control-plane node in "cert-expiration-696000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-696000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-696000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-696000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-17 11:08:40.807943 -0700 PDT m=+903.452824959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-696000 -n cert-expiration-696000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-696000 -n cert-expiration-696000: exit status 7 (36.178333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-696000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-696000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-696000
--- FAIL: TestCertExpiration (195.20s)
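
Note: both phases of this test fail before any certificate is issued; the 195s runtime is dominated by the fixed 3-minute pause between the --cert-expiration=3m start and the 8760h restart, not by the starts themselves (~10s and ~5s). On a working profile, the expiry that the restart is expected to detect can be inspected directly; a sketch, assuming the apiserver certificate lives at the path used by TestCertOptions above:

	out/minikube-darwin-arm64 -p cert-expiration-696000 ssh \
		"sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"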

TestDockerFlags (10.14s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-816000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-816000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.913227583s)

-- stdout --
	* [docker-flags-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-816000" primary control-plane node in "docker-flags-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:05:20.702336    8508 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:05:20.702464    8508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:05:20.702470    8508 out.go:304] Setting ErrFile to fd 2...
	I0717 11:05:20.702473    8508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:05:20.702599    8508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:05:20.703643    8508 out.go:298] Setting JSON to false
	I0717 11:05:20.719843    8508 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5692,"bootTime":1721233828,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:05:20.719909    8508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:05:20.725757    8508 out.go:177] * [docker-flags-816000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:05:20.732819    8508 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:05:20.732873    8508 notify.go:220] Checking for updates...
	I0717 11:05:20.740765    8508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:05:20.743794    8508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:05:20.746724    8508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:05:20.749744    8508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:05:20.752798    8508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:05:20.756103    8508 config.go:182] Loaded profile config "force-systemd-flag-016000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:05:20.756170    8508 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:05:20.756226    8508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:05:20.760711    8508 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:05:20.767744    8508 start.go:297] selected driver: qemu2
	I0717 11:05:20.767750    8508 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:05:20.767755    8508 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:05:20.769914    8508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:05:20.772760    8508 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:05:20.775860    8508 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0717 11:05:20.775878    8508 cni.go:84] Creating CNI manager for ""
	I0717 11:05:20.775888    8508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:05:20.775892    8508 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:05:20.775931    8508 start.go:340] cluster config:
	{Name:docker-flags-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:05:20.779507    8508 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:05:20.786777    8508 out.go:177] * Starting "docker-flags-816000" primary control-plane node in "docker-flags-816000" cluster
	I0717 11:05:20.790781    8508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:05:20.790801    8508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:05:20.790816    8508 cache.go:56] Caching tarball of preloaded images
	I0717 11:05:20.790884    8508 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:05:20.790890    8508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:05:20.790947    8508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/docker-flags-816000/config.json ...
	I0717 11:05:20.790959    8508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/docker-flags-816000/config.json: {Name:mk70493484d8d65e4529423793ab65992dea730a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:05:20.791168    8508 start.go:360] acquireMachinesLock for docker-flags-816000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:05:20.791201    8508 start.go:364] duration metric: took 26.417µs to acquireMachinesLock for "docker-flags-816000"
	I0717 11:05:20.791211    8508 start.go:93] Provisioning new machine with config: &{Name:docker-flags-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:05:20.791240    8508 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:05:20.799771    8508 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:05:20.816086    8508 start.go:159] libmachine.API.Create for "docker-flags-816000" (driver="qemu2")
	I0717 11:05:20.816114    8508 client.go:168] LocalClient.Create starting
	I0717 11:05:20.816167    8508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:05:20.816194    8508 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:20.816208    8508 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:20.816254    8508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:05:20.816276    8508 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:20.816283    8508 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:20.816656    8508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:05:20.956365    8508 main.go:141] libmachine: Creating SSH key...
	I0717 11:05:21.056367    8508 main.go:141] libmachine: Creating Disk image...
	I0717 11:05:21.056372    8508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:05:21.056580    8508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/disk.qcow2
	I0717 11:05:21.065950    8508 main.go:141] libmachine: STDOUT: 
	I0717 11:05:21.065966    8508 main.go:141] libmachine: STDERR: 
	I0717 11:05:21.066010    8508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/disk.qcow2 +20000M
	I0717 11:05:21.073847    8508 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:05:21.073861    8508 main.go:141] libmachine: STDERR: 
	I0717 11:05:21.073881    8508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/disk.qcow2
	I0717 11:05:21.073886    8508 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:05:21.073902    8508 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:05:21.073931    8508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:05:6e:f3:08:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/disk.qcow2
	I0717 11:05:21.075590    8508 main.go:141] libmachine: STDOUT: 
	I0717 11:05:21.075605    8508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:05:21.075618    8508 client.go:171] duration metric: took 259.499834ms to LocalClient.Create
	I0717 11:05:23.077804    8508 start.go:128] duration metric: took 2.286538958s to createHost
	I0717 11:05:23.077851    8508 start.go:83] releasing machines lock for "docker-flags-816000", held for 2.28663675s
	W0717 11:05:23.077965    8508 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:05:23.093176    8508 out.go:177] * Deleting "docker-flags-816000" in qemu2 ...
	W0717 11:05:23.117511    8508 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:05:23.117544    8508 start.go:729] Will try again in 5 seconds ...
	I0717 11:05:28.119741    8508 start.go:360] acquireMachinesLock for docker-flags-816000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:05:28.120196    8508 start.go:364] duration metric: took 344.916µs to acquireMachinesLock for "docker-flags-816000"
	I0717 11:05:28.120275    8508 start.go:93] Provisioning new machine with config: &{Name:docker-flags-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:05:28.120521    8508 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:05:28.133627    8508 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:05:28.183986    8508 start.go:159] libmachine.API.Create for "docker-flags-816000" (driver="qemu2")
	I0717 11:05:28.184044    8508 client.go:168] LocalClient.Create starting
	I0717 11:05:28.184182    8508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:05:28.184247    8508 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:28.184264    8508 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:28.184325    8508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:05:28.184371    8508 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:28.184385    8508 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:28.184972    8508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:05:28.334340    8508 main.go:141] libmachine: Creating SSH key...
	I0717 11:05:28.511589    8508 main.go:141] libmachine: Creating Disk image...
	I0717 11:05:28.511595    8508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:05:28.511785    8508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/disk.qcow2
	I0717 11:05:28.521353    8508 main.go:141] libmachine: STDOUT: 
	I0717 11:05:28.521378    8508 main.go:141] libmachine: STDERR: 
	I0717 11:05:28.521438    8508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/disk.qcow2 +20000M
	I0717 11:05:28.529399    8508 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:05:28.529418    8508 main.go:141] libmachine: STDERR: 
	I0717 11:05:28.529428    8508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/disk.qcow2
	I0717 11:05:28.529432    8508 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:05:28.529442    8508 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:05:28.529472    8508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:83:b4:10:7b:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/docker-flags-816000/disk.qcow2
	I0717 11:05:28.531102    8508 main.go:141] libmachine: STDOUT: 
	I0717 11:05:28.531118    8508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:05:28.531129    8508 client.go:171] duration metric: took 347.078791ms to LocalClient.Create
	I0717 11:05:30.533306    8508 start.go:128] duration metric: took 2.412756292s to createHost
	I0717 11:05:30.533352    8508 start.go:83] releasing machines lock for "docker-flags-816000", held for 2.41312875s
	W0717 11:05:30.533708    8508 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:05:30.552301    8508 out.go:177] 
	W0717 11:05:30.556493    8508 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:05:30.556519    8508 out.go:239] * 
	* 
	W0717 11:05:30.558963    8508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:05:30.576397    8508 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-816000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-816000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-816000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.178958ms)

-- stdout --
	* The control-plane node docker-flags-816000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-816000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-816000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-816000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-816000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-816000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-816000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-816000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-816000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.6155ms)

-- stdout --
	* The control-plane node docker-flags-816000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-816000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-816000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-816000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-816000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-816000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-17 11:05:30.714323 -0700 PDT m=+713.359493626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-816000 -n docker-flags-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-816000 -n docker-flags-816000: exit status 7 (29.058375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-816000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-816000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-816000
--- FAIL: TestDockerFlags (10.14s)
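Note on the failure mode: both create attempts in this test die at the same step. socket_vmnet_client cannot connect to the socket_vmnet daemon's Unix socket at /var/run/socket_vmnet, so QEMU never receives the network file descriptor (fd=3 in the command lines above) and host creation aborts before the VM boots. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew at the paths shown in the log (the Homebrew service name is an assumption, not something this report confirms):

    # Does the Unix socket exist at the path minikube is using?
    ls -l /var/run/socket_vmnet
    # Probe it the way socket_vmnet_client does; "Connection refused" here
    # reproduces the failure independently of minikube (macOS nc supports -U).
    nc -U /var/run/socket_vmnet < /dev/null
    # If nothing is listening, restart the daemon (assumes the Homebrew
    # socket_vmnet service; it must run as root to open the vmnet interface).
    sudo brew services restart socket_vmnet

If the probe connects, rerunning the start command from docker_test.go:53 above should get past host creation.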

TestForceSystemdFlag (9.94s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-016000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-016000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.750465917s)

-- stdout --
	* [force-systemd-flag-016000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-016000" primary control-plane node in "force-systemd-flag-016000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-016000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:05:15.803158    8487 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:05:15.803313    8487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:05:15.803316    8487 out.go:304] Setting ErrFile to fd 2...
	I0717 11:05:15.803318    8487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:05:15.803462    8487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:05:15.804501    8487 out.go:298] Setting JSON to false
	I0717 11:05:15.820439    8487 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5687,"bootTime":1721233828,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:05:15.820519    8487 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:05:15.826510    8487 out.go:177] * [force-systemd-flag-016000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:05:15.833382    8487 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:05:15.833416    8487 notify.go:220] Checking for updates...
	I0717 11:05:15.841478    8487 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:05:15.844475    8487 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:05:15.847397    8487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:05:15.850482    8487 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:05:15.853328    8487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:05:15.856688    8487 config.go:182] Loaded profile config "force-systemd-env-794000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:05:15.856761    8487 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:05:15.856804    8487 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:05:15.861405    8487 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:05:15.868376    8487 start.go:297] selected driver: qemu2
	I0717 11:05:15.868382    8487 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:05:15.868387    8487 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:05:15.870547    8487 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:05:15.873451    8487 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:05:15.874929    8487 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 11:05:15.874941    8487 cni.go:84] Creating CNI manager for ""
	I0717 11:05:15.874947    8487 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:05:15.874951    8487 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:05:15.874984    8487 start.go:340] cluster config:
	{Name:force-systemd-flag-016000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-016000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:05:15.878709    8487 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:05:15.886478    8487 out.go:177] * Starting "force-systemd-flag-016000" primary control-plane node in "force-systemd-flag-016000" cluster
	I0717 11:05:15.890408    8487 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:05:15.890424    8487 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:05:15.890438    8487 cache.go:56] Caching tarball of preloaded images
	I0717 11:05:15.890504    8487 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:05:15.890510    8487 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:05:15.890572    8487 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/force-systemd-flag-016000/config.json ...
	I0717 11:05:15.890591    8487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/force-systemd-flag-016000/config.json: {Name:mked6bc86eef2d2bff7e01f4ef1a4c02527f4ac2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:05:15.890812    8487 start.go:360] acquireMachinesLock for force-systemd-flag-016000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:05:15.890847    8487 start.go:364] duration metric: took 28.959µs to acquireMachinesLock for "force-systemd-flag-016000"
	I0717 11:05:15.890861    8487 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-016000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-016000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:05:15.890887    8487 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:05:15.899398    8487 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:05:15.917747    8487 start.go:159] libmachine.API.Create for "force-systemd-flag-016000" (driver="qemu2")
	I0717 11:05:15.917777    8487 client.go:168] LocalClient.Create starting
	I0717 11:05:15.917850    8487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:05:15.917886    8487 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:15.917895    8487 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:15.917934    8487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:05:15.917958    8487 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:15.917967    8487 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:15.918330    8487 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:05:16.058125    8487 main.go:141] libmachine: Creating SSH key...
	I0717 11:05:16.126932    8487 main.go:141] libmachine: Creating Disk image...
	I0717 11:05:16.126937    8487 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:05:16.127126    8487 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/disk.qcow2
	I0717 11:05:16.136369    8487 main.go:141] libmachine: STDOUT: 
	I0717 11:05:16.136388    8487 main.go:141] libmachine: STDERR: 
	I0717 11:05:16.136461    8487 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/disk.qcow2 +20000M
	I0717 11:05:16.144249    8487 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:05:16.144272    8487 main.go:141] libmachine: STDERR: 
	I0717 11:05:16.144290    8487 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/disk.qcow2
	I0717 11:05:16.144303    8487 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:05:16.144316    8487 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:05:16.144344    8487 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:55:74:79:a9:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/disk.qcow2
	I0717 11:05:16.145977    8487 main.go:141] libmachine: STDOUT: 
	I0717 11:05:16.146004    8487 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:05:16.146022    8487 client.go:171] duration metric: took 228.240417ms to LocalClient.Create
	I0717 11:05:18.148226    8487 start.go:128] duration metric: took 2.257314875s to createHost
	I0717 11:05:18.148373    8487 start.go:83] releasing machines lock for "force-systemd-flag-016000", held for 2.257432458s
	W0717 11:05:18.148438    8487 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:05:18.177580    8487 out.go:177] * Deleting "force-systemd-flag-016000" in qemu2 ...
	W0717 11:05:18.196214    8487 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:05:18.196234    8487 start.go:729] Will try again in 5 seconds ...
	I0717 11:05:23.198402    8487 start.go:360] acquireMachinesLock for force-systemd-flag-016000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:05:23.198818    8487 start.go:364] duration metric: took 339µs to acquireMachinesLock for "force-systemd-flag-016000"
	I0717 11:05:23.198991    8487 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-016000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-016000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:05:23.199233    8487 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:05:23.207543    8487 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:05:23.257210    8487 start.go:159] libmachine.API.Create for "force-systemd-flag-016000" (driver="qemu2")
	I0717 11:05:23.257271    8487 client.go:168] LocalClient.Create starting
	I0717 11:05:23.257418    8487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:05:23.257479    8487 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:23.257497    8487 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:23.257563    8487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:05:23.257607    8487 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:23.257629    8487 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:23.258744    8487 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:05:23.418911    8487 main.go:141] libmachine: Creating SSH key...
	I0717 11:05:23.464061    8487 main.go:141] libmachine: Creating Disk image...
	I0717 11:05:23.464067    8487 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:05:23.464249    8487 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/disk.qcow2
	I0717 11:05:23.473304    8487 main.go:141] libmachine: STDOUT: 
	I0717 11:05:23.473325    8487 main.go:141] libmachine: STDERR: 
	I0717 11:05:23.473380    8487 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/disk.qcow2 +20000M
	I0717 11:05:23.481257    8487 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:05:23.481272    8487 main.go:141] libmachine: STDERR: 
	I0717 11:05:23.481286    8487 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/disk.qcow2
	I0717 11:05:23.481289    8487 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:05:23.481301    8487 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:05:23.481348    8487 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:92:58:f1:2c:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-flag-016000/disk.qcow2
	I0717 11:05:23.482973    8487 main.go:141] libmachine: STDOUT: 
	I0717 11:05:23.482989    8487 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:05:23.483002    8487 client.go:171] duration metric: took 225.72625ms to LocalClient.Create
	I0717 11:05:25.485181    8487 start.go:128] duration metric: took 2.285913875s to createHost
	I0717 11:05:25.485245    8487 start.go:83] releasing machines lock for "force-systemd-flag-016000", held for 2.286397584s
	W0717 11:05:25.485674    8487 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-016000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-016000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:05:25.497315    8487 out.go:177] 
	W0717 11:05:25.501401    8487 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:05:25.501508    8487 out.go:239] * 
	* 
	W0717 11:05:25.504081    8487 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:05:25.512231    8487 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-016000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-016000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-016000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.88825ms)

-- stdout --
	* The control-plane node force-systemd-flag-016000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-016000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-016000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-17 11:05:25.607645 -0700 PDT m=+708.252824042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-016000 -n force-systemd-flag-016000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-016000 -n force-systemd-flag-016000: exit status 7 (31.862875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-016000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-016000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-016000
--- FAIL: TestForceSystemdFlag (9.94s)
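Note on the failure mode: as in TestDockerFlags, the VM never started, so the cgroup-driver assertion was never actually exercised; the exit status 83 replies above are minikube declining to ssh into a stopped host. For reference, on a cluster that does come up, the test's check reduces to the command below (taken verbatim from docker_test.go:110 above); with --force-systemd in effect it should print "systemd" rather than "cgroupfs":

    out/minikube-darwin-arm64 -p force-systemd-flag-016000 ssh "docker info --format {{.CgroupDriver}}"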

TestForceSystemdEnv (10.52s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-794000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-794000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.330982167s)

-- stdout --
	* [force-systemd-env-794000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-794000" primary control-plane node in "force-systemd-env-794000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-794000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:05:10.183771    8453 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:05:10.183898    8453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:05:10.183901    8453 out.go:304] Setting ErrFile to fd 2...
	I0717 11:05:10.183903    8453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:05:10.184045    8453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:05:10.185101    8453 out.go:298] Setting JSON to false
	I0717 11:05:10.201580    8453 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5682,"bootTime":1721233828,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:05:10.201652    8453 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:05:10.208142    8453 out.go:177] * [force-systemd-env-794000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:05:10.217999    8453 notify.go:220] Checking for updates...
	I0717 11:05:10.222985    8453 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:05:10.230890    8453 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:05:10.238802    8453 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:05:10.246957    8453 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:05:10.253881    8453 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:05:10.260871    8453 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0717 11:05:10.265201    8453 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:05:10.265250    8453 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:05:10.268953    8453 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:05:10.275969    8453 start.go:297] selected driver: qemu2
	I0717 11:05:10.275973    8453 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:05:10.275978    8453 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:05:10.278286    8453 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:05:10.281977    8453 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:05:10.286022    8453 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 11:05:10.286037    8453 cni.go:84] Creating CNI manager for ""
	I0717 11:05:10.286046    8453 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:05:10.286049    8453 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:05:10.286087    8453 start.go:340] cluster config:
	{Name:force-systemd-env-794000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-794000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:05:10.289694    8453 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:05:10.297848    8453 out.go:177] * Starting "force-systemd-env-794000" primary control-plane node in "force-systemd-env-794000" cluster
	I0717 11:05:10.301932    8453 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:05:10.301948    8453 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:05:10.301959    8453 cache.go:56] Caching tarball of preloaded images
	I0717 11:05:10.302021    8453 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:05:10.302026    8453 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:05:10.302085    8453 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/force-systemd-env-794000/config.json ...
	I0717 11:05:10.302098    8453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/force-systemd-env-794000/config.json: {Name:mk9cef56e77976f0ee11c63aa6c15df7fcc98c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:05:10.302306    8453 start.go:360] acquireMachinesLock for force-systemd-env-794000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:05:10.302344    8453 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "force-systemd-env-794000"
	I0717 11:05:10.302354    8453 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-794000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-794000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:05:10.302376    8453 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:05:10.309932    8453 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:05:10.326167    8453 start.go:159] libmachine.API.Create for "force-systemd-env-794000" (driver="qemu2")
	I0717 11:05:10.326197    8453 client.go:168] LocalClient.Create starting
	I0717 11:05:10.326258    8453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:05:10.326288    8453 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:10.326297    8453 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:10.326332    8453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:05:10.326355    8453 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:10.326368    8453 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:10.326673    8453 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:05:10.499227    8453 main.go:141] libmachine: Creating SSH key...
	I0717 11:05:10.565717    8453 main.go:141] libmachine: Creating Disk image...
	I0717 11:05:10.565732    8453 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:05:10.565967    8453 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/disk.qcow2
	I0717 11:05:10.575921    8453 main.go:141] libmachine: STDOUT: 
	I0717 11:05:10.575943    8453 main.go:141] libmachine: STDERR: 
	I0717 11:05:10.576012    8453 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/disk.qcow2 +20000M
	I0717 11:05:10.584889    8453 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:05:10.584908    8453 main.go:141] libmachine: STDERR: 
	I0717 11:05:10.584921    8453 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/disk.qcow2
	I0717 11:05:10.584932    8453 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:05:10.584953    8453 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:05:10.584984    8453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:9e:e1:c2:69:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/disk.qcow2
	I0717 11:05:10.586863    8453 main.go:141] libmachine: STDOUT: 
	I0717 11:05:10.586880    8453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:05:10.586900    8453 client.go:171] duration metric: took 260.699417ms to LocalClient.Create
	I0717 11:05:12.589158    8453 start.go:128] duration metric: took 2.286750625s to createHost
	I0717 11:05:12.589245    8453 start.go:83] releasing machines lock for "force-systemd-env-794000", held for 2.2868885s
	W0717 11:05:12.589316    8453 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:05:12.595666    8453 out.go:177] * Deleting "force-systemd-env-794000" in qemu2 ...
	W0717 11:05:12.620239    8453 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:05:12.620268    8453 start.go:729] Will try again in 5 seconds ...
	I0717 11:05:17.622484    8453 start.go:360] acquireMachinesLock for force-systemd-env-794000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:05:18.148507    8453 start.go:364] duration metric: took 525.924959ms to acquireMachinesLock for "force-systemd-env-794000"
	I0717 11:05:18.148646    8453 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-794000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-794000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:05:18.148930    8453 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:05:18.162564    8453 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:05:18.211507    8453 start.go:159] libmachine.API.Create for "force-systemd-env-794000" (driver="qemu2")
	I0717 11:05:18.211548    8453 client.go:168] LocalClient.Create starting
	I0717 11:05:18.211685    8453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:05:18.211758    8453 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:18.211778    8453 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:18.211836    8453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:05:18.211882    8453 main.go:141] libmachine: Decoding PEM data...
	I0717 11:05:18.211895    8453 main.go:141] libmachine: Parsing certificate...
	I0717 11:05:18.212471    8453 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:05:18.362980    8453 main.go:141] libmachine: Creating SSH key...
	I0717 11:05:18.421034    8453 main.go:141] libmachine: Creating Disk image...
	I0717 11:05:18.421040    8453 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:05:18.421213    8453 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/disk.qcow2
	I0717 11:05:18.430554    8453 main.go:141] libmachine: STDOUT: 
	I0717 11:05:18.430567    8453 main.go:141] libmachine: STDERR: 
	I0717 11:05:18.430615    8453 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/disk.qcow2 +20000M
	I0717 11:05:18.438426    8453 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:05:18.438443    8453 main.go:141] libmachine: STDERR: 
	I0717 11:05:18.438453    8453 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/disk.qcow2
	I0717 11:05:18.438458    8453 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:05:18.438466    8453 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:05:18.438509    8453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:eb:d5:fa:11:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/force-systemd-env-794000/disk.qcow2
	I0717 11:05:18.440152    8453 main.go:141] libmachine: STDOUT: 
	I0717 11:05:18.440167    8453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:05:18.440180    8453 client.go:171] duration metric: took 228.628125ms to LocalClient.Create
	I0717 11:05:20.440928    8453 start.go:128] duration metric: took 2.291921958s to createHost
	I0717 11:05:20.440996    8453 start.go:83] releasing machines lock for "force-systemd-env-794000", held for 2.292450709s
	W0717 11:05:20.441362    8453 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-794000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-794000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:05:20.452846    8453 out.go:177] 
	W0717 11:05:20.460866    8453 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:05:20.460902    8453 out.go:239] * 
	* 
	W0717 11:05:20.463222    8453 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:05:20.471796    8453 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-794000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-794000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-794000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.807334ms)

-- stdout --
	* The control-plane node force-systemd-env-794000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-794000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-794000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-17 11:05:20.56897 -0700 PDT m=+703.214156626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-794000 -n force-systemd-env-794000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-794000 -n force-systemd-env-794000: exit status 7 (32.986875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-794000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-794000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-794000
--- FAIL: TestForceSystemdEnv (10.52s)
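
Every create/start attempt in this test dies at the same step: minikube shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused" means nothing is listening on that socket, typically because the socket_vmnet daemon is not running on the build agent). A minimal standalone probe of that step, sketched in Go with only the standard library (the socket path is taken from the SocketVMnetPath field in the config dump above; this is not minikube code):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path copied from the machine config logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the failure in the log:
			// no socket_vmnet daemon is accepting connections.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}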

TestErrorSpam/setup (9.79s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-358000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-358000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 --driver=qemu2 : exit status 80 (9.784682625s)

-- stdout --
	* [nospam-358000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-358000" primary control-plane node in "nospam-358000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-358000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-358000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-358000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-358000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-358000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19282
- KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-358000" primary control-plane node in "nospam-358000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-358000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-358000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.79s)
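
The "unexpected stderr" assertions above amount to comparing each stderr line against a short allowlist and flagging everything else. A sketch of that kind of check (reconstructed for illustration only, not the actual error_spam_test.go source; the allowlist entry is hypothetical):

	package main

	import (
		"fmt"
		"strings"
	)

	// unexpectedLines returns every stderr line that matches none of the
	// allowed substrings.
	func unexpectedLines(stderr string, allowed []string) []string {
		var out []string
	next:
		for _, line := range strings.Split(strings.TrimSpace(stderr), "\n") {
			for _, a := range allowed {
				if strings.Contains(line, a) {
					continue next
				}
			}
			out = append(out, line)
		}
		return out
	}

	func main() {
		stderr := "! StartHost failed, but will try again: ...\n* Booting up control plane ..."
		// "Booting up control plane" is a hypothetical allowlist entry.
		for _, l := range unexpectedLines(stderr, []string{"Booting up control plane"}) {
			fmt.Printf("unexpected stderr: %q\n", l)
		}
	}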

TestFunctional/serial/StartWithProxy (9.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-208000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-208000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.865048708s)

-- stdout --
	* [functional-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-208000" primary control-plane node in "functional-208000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-208000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51123 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51123 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51123 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-208000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-208000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19282
- KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-208000" primary control-plane node in "functional-208000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-208000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51123 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51123 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51123 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-208000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (68.72025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.94s)
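
The two "want" assertions above ("Found network options", "You appear to be using a proxy") never match because the start aborts before network options are printed, and the proxy itself is discarded up front: HTTP_PROXY=localhost:51123 points at the host's loopback interface, which would be unreachable from inside the VM. A sketch of that loopback-proxy test (illustrative only, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"net"
		"net/url"
		"os"
		"strings"
	)

	// isLocalProxy reports whether a proxy value points at the local host,
	// which is useless from inside a guest VM.
	func isLocalProxy(raw string) bool {
		if raw == "" {
			return false
		}
		if !strings.Contains(raw, "://") {
			raw = "http://" + raw // e.g. HTTP_PROXY=localhost:51123 carries no scheme
		}
		u, err := url.Parse(raw)
		if err != nil {
			return false
		}
		if u.Hostname() == "localhost" {
			return true
		}
		ip := net.ParseIP(u.Hostname())
		return ip != nil && ip.IsLoopback()
	}

	func main() {
		if p := os.Getenv("HTTP_PROXY"); isLocalProxy(p) {
			fmt.Printf("! Local proxy ignored: not passing HTTP_PROXY=%s to docker env.\n", p)
		}
	}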

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-208000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-208000 --alsologtostderr -v=8: exit status 80 (5.183734041s)

-- stdout --
	* [functional-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-208000" primary control-plane node in "functional-208000" cluster
	* Restarting existing qemu2 VM for "functional-208000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-208000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 10:54:43.499560    7088 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:54:43.499710    7088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:43.499714    7088 out.go:304] Setting ErrFile to fd 2...
	I0717 10:54:43.499716    7088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:43.499852    7088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:54:43.500846    7088 out.go:298] Setting JSON to false
	I0717 10:54:43.516740    7088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5055,"bootTime":1721233828,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 10:54:43.516811    7088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:54:43.520817    7088 out.go:177] * [functional-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:54:43.527514    7088 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 10:54:43.527561    7088 notify.go:220] Checking for updates...
	I0717 10:54:43.534488    7088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 10:54:43.537533    7088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:54:43.540478    7088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:54:43.543429    7088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 10:54:43.546464    7088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:54:43.549807    7088 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:43.549866    7088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:54:43.553431    7088 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 10:54:43.560630    7088 start.go:297] selected driver: qemu2
	I0717 10:54:43.560640    7088 start.go:901] validating driver "qemu2" against &{Name:functional-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:54:43.560702    7088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:54:43.562848    7088 cni.go:84] Creating CNI manager for ""
	I0717 10:54:43.562865    7088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 10:54:43.562921    7088 start.go:340] cluster config:
	{Name:functional-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:54:43.566369    7088 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:54:43.575482    7088 out.go:177] * Starting "functional-208000" primary control-plane node in "functional-208000" cluster
	I0717 10:54:43.579488    7088 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:54:43.579502    7088 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:54:43.579511    7088 cache.go:56] Caching tarball of preloaded images
	I0717 10:54:43.579565    7088 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:54:43.579570    7088 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:54:43.579615    7088 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/functional-208000/config.json ...
	I0717 10:54:43.580126    7088 start.go:360] acquireMachinesLock for functional-208000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:54:43.580156    7088 start.go:364] duration metric: took 23.209µs to acquireMachinesLock for "functional-208000"
	I0717 10:54:43.580164    7088 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:54:43.580172    7088 fix.go:54] fixHost starting: 
	I0717 10:54:43.580291    7088 fix.go:112] recreateIfNeeded on functional-208000: state=Stopped err=<nil>
	W0717 10:54:43.580299    7088 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:54:43.588514    7088 out.go:177] * Restarting existing qemu2 VM for "functional-208000" ...
	I0717 10:54:43.592452    7088 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:54:43.592492    7088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:ba:30:0b:47:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/disk.qcow2
	I0717 10:54:43.594555    7088 main.go:141] libmachine: STDOUT: 
	I0717 10:54:43.594571    7088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:54:43.594598    7088 fix.go:56] duration metric: took 14.426417ms for fixHost
	I0717 10:54:43.594603    7088 start.go:83] releasing machines lock for "functional-208000", held for 14.443791ms
	W0717 10:54:43.594609    7088 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:54:43.594638    7088 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:54:43.594643    7088 start.go:729] Will try again in 5 seconds ...
	I0717 10:54:48.596678    7088 start.go:360] acquireMachinesLock for functional-208000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:54:48.597133    7088 start.go:364] duration metric: took 326.125µs to acquireMachinesLock for "functional-208000"
	I0717 10:54:48.597263    7088 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:54:48.597287    7088 fix.go:54] fixHost starting: 
	I0717 10:54:48.597952    7088 fix.go:112] recreateIfNeeded on functional-208000: state=Stopped err=<nil>
	W0717 10:54:48.597977    7088 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:54:48.601470    7088 out.go:177] * Restarting existing qemu2 VM for "functional-208000" ...
	I0717 10:54:48.608335    7088 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:54:48.608524    7088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:ba:30:0b:47:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/disk.qcow2
	I0717 10:54:48.617547    7088 main.go:141] libmachine: STDOUT: 
	I0717 10:54:48.617614    7088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:54:48.617687    7088 fix.go:56] duration metric: took 20.405708ms for fixHost
	I0717 10:54:48.617710    7088 start.go:83] releasing machines lock for "functional-208000", held for 20.555583ms
	W0717 10:54:48.617903    7088 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-208000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-208000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:54:48.625301    7088 out.go:177] 
	W0717 10:54:48.629379    7088 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:54:48.629405    7088 out.go:239] * 
	* 
	W0717 10:54:48.631914    7088 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:54:48.639320    7088 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-208000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.185357042s for "functional-208000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (63.299292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (28.945041ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-208000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (30.458958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
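
This failure is pure fallout from the cluster never starting: since the VM was never provisioned, no functional-208000 context was ever written to the kubeconfig, so kubectl has nothing to report. The probe can be reproduced outside the test harness with a few lines of Go (assumes kubectl is on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "config", "current-context").Output()
		if err != nil {
			// Matches the "error: current-context is not set" failure above.
			fmt.Println("no current context:", err)
			return
		}
		fmt.Println("current context:", strings.TrimSpace(string(out)))
	}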

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-208000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-208000 get po -A: exit status 1 (26.313166ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-208000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-208000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-208000\n"*: args "kubectl --context functional-208000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-208000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (30.168667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh sudo crictl images: exit status 83 (41.833459ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-208000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
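
The assertion here is a substring check: run crictl images inside the guest and expect the pause:3.3 image id prefix 3d18732f8686c in the output. With the host stopped, the ssh wrapper exits with status 83 before any image listing happens. A sketch of the same check via os/exec (binary path, profile name, and sha are taken from the log above; this is not the test's own code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-208000",
			"ssh", "sudo crictl images").CombinedOutput()
		if err != nil {
			fmt.Println("ssh failed (host not running?):", err)
			return
		}
		if !strings.Contains(string(out), "3d18732f8686c") {
			fmt.Println("expected sha for pause:3.3 not found in crictl images output")
		}
	}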

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (40.947334ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-208000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (38.930791ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.9105ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-208000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)
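
The flow being exercised is: delete the cached image inside the guest, confirm it is gone, run "cache reload", then confirm crictl can inspect the image again. Here every step short-circuits with exit status 83 because the node is stopped. The same sequence, sketched with os/exec (command arguments copied from the dbg lines above; illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command, echoes its combined output, and returns the error.
	func run(bin string, args ...string) error {
		out, err := exec.Command(bin, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", bin, args, out)
		return err
	}

	func main() {
		bin, profile := "out/minikube-darwin-arm64", "functional-208000"
		_ = run(bin, "-p", profile, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
		_ = run(bin, "-p", profile, "cache", "reload")
		if err := run(bin, "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image still missing after cache reload:", err)
		}
	}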

TestFunctional/serial/MinikubeKubectlCmd (0.73s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 kubectl -- --context functional-208000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 kubectl -- --context functional-208000 get pods: exit status 1 (700.015959ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-208000
	* no server found for cluster "functional-208000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-208000 kubectl -- --context functional-208000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (31.598667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.73s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-208000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-208000 get pods: exit status 1 (944.002833ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-208000
	* no server found for cluster "functional-208000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-208000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (29.128042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-208000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-208000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.182903416s)

-- stdout --
	* [functional-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-208000" primary control-plane node in "functional-208000" cluster
	* Restarting existing qemu2 VM for "functional-208000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-208000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-208000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-208000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.18341225s for "functional-208000" cluster.
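Note: every restart attempt in this test dies at the same step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A hypothetical triage sequence on the affected host, using only the paths printed in this log plus standard macOS tooling:

	# Is the daemon's unix socket present at the configured path?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet process running at all?
	pgrep -fl socket_vmnet
	# If socket_vmnet was installed via Homebrew, restarting its service is one plausible fix
	sudo brew services restart socket_vmnet
	# Otherwise, the log's own advice applies: recreate the profile
	out/minikube-darwin-arm64 delete -p functional-208000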
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (67.720375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-208000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-208000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.754625ms)

** stderr ** 
	error: context "functional-208000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-208000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (29.518292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 logs: exit status 83 (76.670083ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
	|         | -p download-only-716000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
	| delete  | -p download-only-716000                                                  | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
	| start   | -o=json --download-only                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
	|         | -p download-only-580000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
	| delete  | -p download-only-580000                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
	| start   | -o=json --download-only                                                  | download-only-152000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
	|         | -p download-only-152000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	| delete  | -p download-only-152000                                                  | download-only-152000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	| delete  | -p download-only-716000                                                  | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	| delete  | -p download-only-580000                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	| delete  | -p download-only-152000                                                  | download-only-152000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	| start   | --download-only -p                                                       | binary-mirror-527000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | binary-mirror-527000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51087                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-527000                                                  | binary-mirror-527000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	| addons  | disable dashboard -p                                                     | addons-562000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | addons-562000                                                            |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                      | addons-562000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | addons-562000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-562000 --wait=true                                             | addons-562000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-562000                                                         | addons-562000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	| start   | -p nospam-358000 -n=1 --memory=2250 --wait=false                         | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-358000                                                         | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	| start   | -p functional-208000                                                     | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-208000                                                     | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-208000 cache add                                              | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-208000 cache add                                              | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-208000 cache add                                              | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-208000 cache add                                              | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	|         | minikube-local-cache-test:functional-208000                              |                      |         |         |                     |                     |
	| cache   | functional-208000 cache delete                                           | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	|         | minikube-local-cache-test:functional-208000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	| ssh     | functional-208000 ssh sudo                                               | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-208000                                                        | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-208000 ssh                                                    | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-208000 cache reload                                           | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	| ssh     | functional-208000 ssh                                                    | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-208000 kubectl --                                             | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | --context functional-208000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-208000                                                     | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:54:53
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:54:53.668880    7162 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:54:53.669002    7162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:53.669004    7162 out.go:304] Setting ErrFile to fd 2...
	I0717 10:54:53.669005    7162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:53.669144    7162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:54:53.670109    7162 out.go:298] Setting JSON to false
	I0717 10:54:53.686045    7162 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5065,"bootTime":1721233828,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 10:54:53.686112    7162 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:54:53.691655    7162 out.go:177] * [functional-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:54:53.698575    7162 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 10:54:53.698613    7162 notify.go:220] Checking for updates...
	I0717 10:54:53.706511    7162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 10:54:53.710550    7162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:54:53.713490    7162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:54:53.716581    7162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 10:54:53.719541    7162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:54:53.721173    7162 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:53.721223    7162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:54:53.725552    7162 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 10:54:53.732401    7162 start.go:297] selected driver: qemu2
	I0717 10:54:53.732405    7162 start.go:901] validating driver "qemu2" against &{Name:functional-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:54:53.732464    7162 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:54:53.734761    7162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:54:53.734781    7162 cni.go:84] Creating CNI manager for ""
	I0717 10:54:53.734788    7162 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 10:54:53.734828    7162 start.go:340] cluster config:
	{Name:functional-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:54:53.738350    7162 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:54:53.748528    7162 out.go:177] * Starting "functional-208000" primary control-plane node in "functional-208000" cluster
	I0717 10:54:53.754609    7162 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:54:53.754622    7162 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:54:53.754631    7162 cache.go:56] Caching tarball of preloaded images
	I0717 10:54:53.754692    7162 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:54:53.754696    7162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:54:53.754754    7162 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/functional-208000/config.json ...
	I0717 10:54:53.755052    7162 start.go:360] acquireMachinesLock for functional-208000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:54:53.755088    7162 start.go:364] duration metric: took 30.667µs to acquireMachinesLock for "functional-208000"
	I0717 10:54:53.755095    7162 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:54:53.755101    7162 fix.go:54] fixHost starting: 
	I0717 10:54:53.755213    7162 fix.go:112] recreateIfNeeded on functional-208000: state=Stopped err=<nil>
	W0717 10:54:53.755220    7162 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:54:53.759584    7162 out.go:177] * Restarting existing qemu2 VM for "functional-208000" ...
	I0717 10:54:53.764519    7162 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:54:53.764565    7162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:ba:30:0b:47:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/disk.qcow2
	I0717 10:54:53.766466    7162 main.go:141] libmachine: STDOUT: 
	I0717 10:54:53.766481    7162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:54:53.766513    7162 fix.go:56] duration metric: took 11.412792ms for fixHost
	I0717 10:54:53.766517    7162 start.go:83] releasing machines lock for "functional-208000", held for 11.426667ms
	W0717 10:54:53.766521    7162 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:54:53.766554    7162 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:54:53.766559    7162 start.go:729] Will try again in 5 seconds ...
	I0717 10:54:58.767975    7162 start.go:360] acquireMachinesLock for functional-208000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:54:58.768421    7162 start.go:364] duration metric: took 355.667µs to acquireMachinesLock for "functional-208000"
	I0717 10:54:58.768553    7162 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:54:58.768566    7162 fix.go:54] fixHost starting: 
	I0717 10:54:58.769277    7162 fix.go:112] recreateIfNeeded on functional-208000: state=Stopped err=<nil>
	W0717 10:54:58.769296    7162 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:54:58.773019    7162 out.go:177] * Restarting existing qemu2 VM for "functional-208000" ...
	I0717 10:54:58.779780    7162 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:54:58.780145    7162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:ba:30:0b:47:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/disk.qcow2
	I0717 10:54:58.789686    7162 main.go:141] libmachine: STDOUT: 
	I0717 10:54:58.789742    7162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:54:58.789824    7162 fix.go:56] duration metric: took 21.258916ms for fixHost
	I0717 10:54:58.789841    7162 start.go:83] releasing machines lock for "functional-208000", held for 21.405208ms
	W0717 10:54:58.790020    7162 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-208000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:54:58.797782    7162 out.go:177] 
	W0717 10:54:58.801803    7162 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:54:58.801823    7162 out.go:239] * 
	W0717 10:54:58.804314    7162 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:54:58.811629    7162 out.go:177] 
	
	
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-208000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
|         | -p download-only-716000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
| delete  | -p download-only-716000                                                  | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
| start   | -o=json --download-only                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
|         | -p download-only-580000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
| delete  | -p download-only-580000                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
| start   | -o=json --download-only                                                  | download-only-152000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
|         | -p download-only-152000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| delete  | -p download-only-152000                                                  | download-only-152000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| delete  | -p download-only-716000                                                  | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| delete  | -p download-only-580000                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| delete  | -p download-only-152000                                                  | download-only-152000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| start   | --download-only -p                                                       | binary-mirror-527000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | binary-mirror-527000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51087                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-527000                                                  | binary-mirror-527000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| addons  | disable dashboard -p                                                     | addons-562000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | addons-562000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-562000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | addons-562000                                                            |                      |         |         |                     |                     |
| start   | -p addons-562000 --wait=true                                             | addons-562000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-562000                                                         | addons-562000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| start   | -p nospam-358000 -n=1 --memory=2250 --wait=false                         | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-358000                                                         | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| start   | -p functional-208000                                                     | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-208000                                                     | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-208000 cache add                                              | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-208000 cache add                                              | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-208000 cache add                                              | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-208000 cache add                                              | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | minikube-local-cache-test:functional-208000                              |                      |         |         |                     |                     |
| cache   | functional-208000 cache delete                                           | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | minikube-local-cache-test:functional-208000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| ssh     | functional-208000 ssh sudo                                               | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-208000                                                        | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-208000 ssh                                                    | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-208000 cache reload                                           | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| ssh     | functional-208000 ssh                                                    | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-208000 kubectl --                                             | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | --context functional-208000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-208000                                                     | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/17 10:54:53
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0717 10:54:53.668880    7162 out.go:291] Setting OutFile to fd 1 ...
I0717 10:54:53.669002    7162 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:54:53.669004    7162 out.go:304] Setting ErrFile to fd 2...
I0717 10:54:53.669005    7162 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:54:53.669144    7162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
I0717 10:54:53.670109    7162 out.go:298] Setting JSON to false
I0717 10:54:53.686045    7162 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5065,"bootTime":1721233828,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0717 10:54:53.686112    7162 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0717 10:54:53.691655    7162 out.go:177] * [functional-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0717 10:54:53.698575    7162 out.go:177]   - MINIKUBE_LOCATION=19282
I0717 10:54:53.698613    7162 notify.go:220] Checking for updates...
I0717 10:54:53.706511    7162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
I0717 10:54:53.710550    7162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0717 10:54:53.713490    7162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 10:54:53.716581    7162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
I0717 10:54:53.719541    7162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0717 10:54:53.721173    7162 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:54:53.721223    7162 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 10:54:53.725552    7162 out.go:177] * Using the qemu2 driver based on existing profile
I0717 10:54:53.732401    7162 start.go:297] selected driver: qemu2
I0717 10:54:53.732405    7162 start.go:901] validating driver "qemu2" against &{Name:functional-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:54:53.732464    7162 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0717 10:54:53.734761    7162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0717 10:54:53.734781    7162 cni.go:84] Creating CNI manager for ""
I0717 10:54:53.734788    7162 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0717 10:54:53.734828    7162 start.go:340] cluster config:
{Name:functional-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:54:53.738350    7162 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 10:54:53.748528    7162 out.go:177] * Starting "functional-208000" primary control-plane node in "functional-208000" cluster
I0717 10:54:53.754609    7162 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0717 10:54:53.754622    7162 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
I0717 10:54:53.754631    7162 cache.go:56] Caching tarball of preloaded images
I0717 10:54:53.754692    7162 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0717 10:54:53.754696    7162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0717 10:54:53.754754    7162 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/functional-208000/config.json ...
I0717 10:54:53.755052    7162 start.go:360] acquireMachinesLock for functional-208000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 10:54:53.755088    7162 start.go:364] duration metric: took 30.667µs to acquireMachinesLock for "functional-208000"
I0717 10:54:53.755095    7162 start.go:96] Skipping create...Using existing machine configuration
I0717 10:54:53.755101    7162 fix.go:54] fixHost starting: 
I0717 10:54:53.755213    7162 fix.go:112] recreateIfNeeded on functional-208000: state=Stopped err=<nil>
W0717 10:54:53.755220    7162 fix.go:138] unexpected machine state, will restart: <nil>
I0717 10:54:53.759584    7162 out.go:177] * Restarting existing qemu2 VM for "functional-208000" ...
I0717 10:54:53.764519    7162 qemu.go:418] Using hvf for hardware acceleration
I0717 10:54:53.764565    7162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:ba:30:0b:47:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/disk.qcow2
I0717 10:54:53.766466    7162 main.go:141] libmachine: STDOUT: 
I0717 10:54:53.766481    7162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0717 10:54:53.766513    7162 fix.go:56] duration metric: took 11.412792ms for fixHost
I0717 10:54:53.766517    7162 start.go:83] releasing machines lock for "functional-208000", held for 11.426667ms
W0717 10:54:53.766521    7162 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0717 10:54:53.766554    7162 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0717 10:54:53.766559    7162 start.go:729] Will try again in 5 seconds ...
I0717 10:54:58.767975    7162 start.go:360] acquireMachinesLock for functional-208000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 10:54:58.768421    7162 start.go:364] duration metric: took 355.667µs to acquireMachinesLock for "functional-208000"
I0717 10:54:58.768553    7162 start.go:96] Skipping create...Using existing machine configuration
I0717 10:54:58.768566    7162 fix.go:54] fixHost starting: 
I0717 10:54:58.769277    7162 fix.go:112] recreateIfNeeded on functional-208000: state=Stopped err=<nil>
W0717 10:54:58.769296    7162 fix.go:138] unexpected machine state, will restart: <nil>
I0717 10:54:58.773019    7162 out.go:177] * Restarting existing qemu2 VM for "functional-208000" ...
I0717 10:54:58.779780    7162 qemu.go:418] Using hvf for hardware acceleration
I0717 10:54:58.780145    7162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:ba:30:0b:47:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/disk.qcow2
I0717 10:54:58.789686    7162 main.go:141] libmachine: STDOUT: 
I0717 10:54:58.789742    7162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0717 10:54:58.789824    7162 fix.go:56] duration metric: took 21.258916ms for fixHost
I0717 10:54:58.789841    7162 start.go:83] releasing machines lock for "functional-208000", held for 21.405208ms
W0717 10:54:58.790020    7162 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-208000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0717 10:54:58.797782    7162 out.go:177] 
W0717 10:54:58.801803    7162 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0717 10:54:58.801823    7162 out.go:239] * 
W0717 10:54:58.804314    7162 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 10:54:58.811629    7162 out.go:177] 

* The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
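
Both start attempts above fail identically: the qemu2 driver cannot reach the socket_vmnet control socket, so the VM never boots and every dependent check (including this logs test) runs against a stopped host. Before re-running the suite it is worth probing the socket directly. The sketch below is a diagnostic aid only, not part of the test harness; it assumes the default SocketVMnetPath (/var/run/socket_vmnet) printed in the cluster config above.

// socketprobe.go — a minimal sketch: dial the unix socket that the qemu2
// driver hands to socket_vmnet_client. Adjust the path if your install differs.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const path = "/var/run/socket_vmnet" // assumed default, taken from the log above
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		// A "connection refused" here reproduces the driver failure in the log:
		// nothing is listening, i.e. the socket_vmnet daemon is not running.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", path, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", path)
}

If the probe fails, restarting the socket_vmnet daemon on the agent (commonly run as a root service when installed via Homebrew) should clear this whole family of GUEST_PROVISION failures, since each test repeats the same connection-refused start path.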

TestFunctional/serial/LogsFileCmd (0.06s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd325862469/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
|         | -p download-only-716000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
| delete  | -p download-only-716000                                                  | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
| start   | -o=json --download-only                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
|         | -p download-only-580000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
| delete  | -p download-only-580000                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
| start   | -o=json --download-only                                                  | download-only-152000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
|         | -p download-only-152000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| delete  | -p download-only-152000                                                  | download-only-152000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| delete  | -p download-only-716000                                                  | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| delete  | -p download-only-580000                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| delete  | -p download-only-152000                                                  | download-only-152000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| start   | --download-only -p                                                       | binary-mirror-527000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | binary-mirror-527000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51087                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-527000                                                  | binary-mirror-527000 | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| addons  | disable dashboard -p                                                     | addons-562000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | addons-562000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-562000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | addons-562000                                                            |                      |         |         |                     |                     |
| start   | -p addons-562000 --wait=true                                             | addons-562000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-562000                                                         | addons-562000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| start   | -p nospam-358000 -n=1 --memory=2250 --wait=false                         | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-358000 --log_dir                                                  | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-358000                                                         | nospam-358000        | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| start   | -p functional-208000                                                     | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-208000                                                     | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-208000 cache add                                              | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-208000 cache add                                              | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-208000 cache add                                              | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-208000 cache add                                              | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | minikube-local-cache-test:functional-208000                              |                      |         |         |                     |                     |
| cache   | functional-208000 cache delete                                           | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | minikube-local-cache-test:functional-208000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| ssh     | functional-208000 ssh sudo                                               | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-208000                                                        | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-208000 ssh                                                    | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-208000 cache reload                                           | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
| ssh     | functional-208000 ssh                                                    | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT | 17 Jul 24 10:54 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-208000 kubectl --                                             | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | --context functional-208000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-208000                                                     | functional-208000    | jenkins | v1.33.1 | 17 Jul 24 10:54 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/17 10:54:53
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0717 10:54:53.668880    7162 out.go:291] Setting OutFile to fd 1 ...
I0717 10:54:53.669002    7162 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:54:53.669004    7162 out.go:304] Setting ErrFile to fd 2...
I0717 10:54:53.669005    7162 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:54:53.669144    7162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
I0717 10:54:53.670109    7162 out.go:298] Setting JSON to false
I0717 10:54:53.686045    7162 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5065,"bootTime":1721233828,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0717 10:54:53.686112    7162 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0717 10:54:53.691655    7162 out.go:177] * [functional-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0717 10:54:53.698575    7162 out.go:177]   - MINIKUBE_LOCATION=19282
I0717 10:54:53.698613    7162 notify.go:220] Checking for updates...
I0717 10:54:53.706511    7162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
I0717 10:54:53.710550    7162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0717 10:54:53.713490    7162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 10:54:53.716581    7162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
I0717 10:54:53.719541    7162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0717 10:54:53.721173    7162 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:54:53.721223    7162 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 10:54:53.725552    7162 out.go:177] * Using the qemu2 driver based on existing profile
I0717 10:54:53.732401    7162 start.go:297] selected driver: qemu2
I0717 10:54:53.732405    7162 start.go:901] validating driver "qemu2" against &{Name:functional-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:54:53.732464    7162 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0717 10:54:53.734761    7162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0717 10:54:53.734781    7162 cni.go:84] Creating CNI manager for ""
I0717 10:54:53.734788    7162 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0717 10:54:53.734828    7162 start.go:340] cluster config:
{Name:functional-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:54:53.738350    7162 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 10:54:53.748528    7162 out.go:177] * Starting "functional-208000" primary control-plane node in "functional-208000" cluster
I0717 10:54:53.754609    7162 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0717 10:54:53.754622    7162 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
I0717 10:54:53.754631    7162 cache.go:56] Caching tarball of preloaded images
I0717 10:54:53.754692    7162 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0717 10:54:53.754696    7162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0717 10:54:53.754754    7162 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/functional-208000/config.json ...
I0717 10:54:53.755052    7162 start.go:360] acquireMachinesLock for functional-208000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 10:54:53.755088    7162 start.go:364] duration metric: took 30.667µs to acquireMachinesLock for "functional-208000"
I0717 10:54:53.755095    7162 start.go:96] Skipping create...Using existing machine configuration
I0717 10:54:53.755101    7162 fix.go:54] fixHost starting: 
I0717 10:54:53.755213    7162 fix.go:112] recreateIfNeeded on functional-208000: state=Stopped err=<nil>
W0717 10:54:53.755220    7162 fix.go:138] unexpected machine state, will restart: <nil>
I0717 10:54:53.759584    7162 out.go:177] * Restarting existing qemu2 VM for "functional-208000" ...
I0717 10:54:53.764519    7162 qemu.go:418] Using hvf for hardware acceleration
I0717 10:54:53.764565    7162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:ba:30:0b:47:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/disk.qcow2
I0717 10:54:53.766466    7162 main.go:141] libmachine: STDOUT: 
I0717 10:54:53.766481    7162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0717 10:54:53.766513    7162 fix.go:56] duration metric: took 11.412792ms for fixHost
I0717 10:54:53.766517    7162 start.go:83] releasing machines lock for "functional-208000", held for 11.426667ms
W0717 10:54:53.766521    7162 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0717 10:54:53.766554    7162 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0717 10:54:53.766559    7162 start.go:729] Will try again in 5 seconds ...
I0717 10:54:58.767975    7162 start.go:360] acquireMachinesLock for functional-208000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 10:54:58.768421    7162 start.go:364] duration metric: took 355.667µs to acquireMachinesLock for "functional-208000"
I0717 10:54:58.768553    7162 start.go:96] Skipping create...Using existing machine configuration
I0717 10:54:58.768566    7162 fix.go:54] fixHost starting: 
I0717 10:54:58.769277    7162 fix.go:112] recreateIfNeeded on functional-208000: state=Stopped err=<nil>
W0717 10:54:58.769296    7162 fix.go:138] unexpected machine state, will restart: <nil>
I0717 10:54:58.773019    7162 out.go:177] * Restarting existing qemu2 VM for "functional-208000" ...
I0717 10:54:58.779780    7162 qemu.go:418] Using hvf for hardware acceleration
I0717 10:54:58.780145    7162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:ba:30:0b:47:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/functional-208000/disk.qcow2
I0717 10:54:58.789686    7162 main.go:141] libmachine: STDOUT: 
I0717 10:54:58.789742    7162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0717 10:54:58.789824    7162 fix.go:56] duration metric: took 21.258916ms for fixHost
I0717 10:54:58.789841    7162 start.go:83] releasing machines lock for "functional-208000", held for 21.405208ms
W0717 10:54:58.790020    7162 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-208000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0717 10:54:58.797782    7162 out.go:177] 
W0717 10:54:58.801803    7162 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0717 10:54:58.801823    7162 out.go:239] * 
W0717 10:54:58.804314    7162 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 10:54:58.811629    7162 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.06s)
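Note: every failure in this log traces back to the same root cause: the QEMU network helper socket at /var/run/socket_vmnet refuses connections, so the VM never boots. A minimal sketch for checking the helper daemon on the build host (the client and socket paths come from the log above; that socket_vmnet runs as a launchd service is an assumption about how it was installed):

	# Socket file should exist if the daemon is up (path taken from the log)
	ls -l /var/run/socket_vmnet
	# Daemon process should be listed here
	pgrep -fl socket_vmnet
	# If installed as a LaunchDaemon (assumption), confirm it is loaded
	sudo launchctl list | grep -i socket_vmnet

If the daemon is down, restarting it and re-running "minikube start -p functional-208000" should get past the "Connection refused" stage.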

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-208000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-208000 apply -f testdata/invalidsvc.yaml: exit status 1 (26.695167ms)

** stderr ** 
	error: context "functional-208000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-208000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-208000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-208000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-208000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-208000 --alsologtostderr -v=1] stderr:
I0717 10:55:39.095602    7475 out.go:291] Setting OutFile to fd 1 ...
I0717 10:55:39.096000    7475 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:39.096004    7475 out.go:304] Setting ErrFile to fd 2...
I0717 10:55:39.096007    7475 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:39.096186    7475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
I0717 10:55:39.096452    7475 mustload.go:65] Loading cluster: functional-208000
I0717 10:55:39.096630    7475 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:55:39.100906    7475 out.go:177] * The control-plane node functional-208000 host is not running: state=Stopped
I0717 10:55:39.106821    7475 out.go:177]   To start a cluster, run: "minikube start -p functional-208000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (42.745458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 status: exit status 7 (29.157083ms)

-- stdout --
	functional-208000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-208000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.049375ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-208000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 status -o json: exit status 7 (28.731583ms)

-- stdout --
	{"Name":"functional-208000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-208000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (28.891625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
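Note: the three status probes above are the same data in three encodings: the default table, a custom go-template via -f, and JSON via -o. The JSON form is the easiest to consume from a script; a minimal sketch, assuming jq is available on the host (everything else is copied from the log):

	# Pull the host state out of the JSON status shown above (jq is an assumption)
	out/minikube-darwin-arm64 -p functional-208000 status -o json | jq -r '.Host'
	# Prints "Stopped" here; "Running" on a healthy cluster

minikube status exits non-zero (status 7) when the host is stopped, which is why the harness tags it "(may be ok)"; scripted checks should inspect the output rather than the exit code alone.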

TestFunctional/parallel/ServiceCmdConnect (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-208000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-208000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.196459ms)

** stderr ** 
	error: context "functional-208000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-208000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-208000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-208000 describe po hello-node-connect: exit status 1 (25.724917ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-208000

** /stderr **
functional_test.go:1600: "kubectl --context functional-208000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-208000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-208000 logs -l app=hello-node-connect: exit status 1 (25.724334ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-208000

** /stderr **
functional_test.go:1606: "kubectl --context functional-208000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-208000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-208000 describe svc hello-node-connect: exit status 1 (25.672709ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-208000

** /stderr **
functional_test.go:1612: "kubectl --context functional-208000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (29.178625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-208000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (30.964834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.11s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "echo hello": exit status 83 (41.642334ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-208000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-208000\"\n"*. args "out/minikube-darwin-arm64 -p functional-208000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "cat /etc/hostname": exit status 83 (40.751ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-208000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-208000"- but got *"* The control-plane node functional-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-208000\"\n"*. args "out/minikube-darwin-arm64 -p functional-208000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (30.579042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.11s)

TestFunctional/parallel/CpCmd (0.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (54.441291ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-208000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh -n functional-208000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh -n functional-208000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.016125ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-208000 ssh -n functional-208000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-208000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-208000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 cp functional-208000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd385554789/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 cp functional-208000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd385554789/001/cp-test.txt: exit status 83 (42.728875ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-208000 cp functional-208000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd385554789/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh -n functional-208000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh -n functional-208000 "sudo cat /home/docker/cp-test.txt": exit status 83 (39.855542ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-208000 ssh -n functional-208000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd385554789/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-208000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (46.723791ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-208000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh -n functional-208000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh -n functional-208000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (40.008583ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-208000 ssh -n functional-208000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-208000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-208000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.27s)
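Note: in the go-cmp hunks above, "-" lines are the expected contents (want) and "+" lines are what actually came back (got); every cp/ssh round-trip returned the driver's "host is not running" banner on stdout instead of file data, which is exactly what the mismatches show. The round-trip can be reproduced by hand with the same commands the test uses (copied verbatim from the log):

	# Copy a file into the VM, then read it back over ssh
	out/minikube-darwin-arm64 -p functional-208000 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-darwin-arm64 -p functional-208000 ssh -n functional-208000 "sudo cat /home/docker/cp-test.txt"
	# With the VM stopped, both print the "minikube start" hint and exit 83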

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/6820/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /etc/test/nested/copy/6820/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /etc/test/nested/copy/6820/hosts": exit status 83 (38.727875ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /etc/test/nested/copy/6820/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-208000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-208000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (29.872375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/6820.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /etc/ssl/certs/6820.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /etc/ssl/certs/6820.pem": exit status 83 (45.472875ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/6820.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-208000 ssh \"sudo cat /etc/ssl/certs/6820.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6820.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-208000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-208000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/6820.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /usr/share/ca-certificates/6820.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /usr/share/ca-certificates/6820.pem": exit status 83 (39.70025ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/6820.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-208000 ssh \"sudo cat /usr/share/ca-certificates/6820.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6820.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-208000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-208000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (40.791334ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-208000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-208000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-208000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/68202.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /etc/ssl/certs/68202.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /etc/ssl/certs/68202.pem": exit status 83 (41.56175ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/68202.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-208000 ssh \"sudo cat /etc/ssl/certs/68202.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/68202.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-208000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-208000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/68202.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /usr/share/ca-certificates/68202.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /usr/share/ca-certificates/68202.pem": exit status 83 (40.868334ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/68202.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-208000 ssh \"sudo cat /usr/share/ca-certificates/68202.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/68202.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-208000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-208000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (38.552875ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-208000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-208000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-208000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (29.898959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)

TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-208000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-208000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.676791ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-208000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-208000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-208000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-208000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-208000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-208000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-208000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-208000 -n functional-208000: exit status 7 (30.032667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-208000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
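The go-template the test feeds to kubectl can be exercised offline to see what a passing run would print. A minimal sketch with a made-up labels map (only minikube.k8s.io/name shown; real nodes carry the full label set the test asserts on):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Same template shape the test passes via --output=go-template: walk
		// the first item's metadata.labels and print each key.
		tmpl := template.Must(template.New("labels").Parse(
			`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
		// Made-up stand-in for the `kubectl get nodes` response.
		data := map[string]any{
			"items": []any{
				map[string]any{"metadata": map[string]any{
					"labels": map[string]string{"minikube.k8s.io/name": "functional-208000"},
				}},
			},
		}
		_ = tmpl.Execute(os.Stdout, data) // prints: minikube.k8s.io/name
	}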

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "sudo systemctl is-active crio": exit status 83 (40.506834ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-208000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 version -o=json --components: exit status 83 (40.93775ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-208000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-208000 image ls --format short --alsologtostderr:
I0717 10:55:39.499605    7490 out.go:291] Setting OutFile to fd 1 ...
I0717 10:55:39.499769    7490 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:39.499772    7490 out.go:304] Setting ErrFile to fd 2...
I0717 10:55:39.499774    7490 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:39.499917    7490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
I0717 10:55:39.500353    7490 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:55:39.500422    7490 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-208000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-208000 image ls --format table --alsologtostderr:
I0717 10:55:39.569883    7494 out.go:291] Setting OutFile to fd 1 ...
I0717 10:55:39.570024    7494 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:39.570027    7494 out.go:304] Setting ErrFile to fd 2...
I0717 10:55:39.570029    7494 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:39.570167    7494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
I0717 10:55:39.570589    7494 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:55:39.570651    7494 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-208000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-208000 image ls --format json --alsologtostderr:
I0717 10:55:39.534786    7492 out.go:291] Setting OutFile to fd 1 ...
I0717 10:55:39.534931    7492 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:39.534934    7492 out.go:304] Setting ErrFile to fd 2...
I0717 10:55:39.534936    7492 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:39.535082    7492 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
I0717 10:55:39.535502    7492 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:55:39.535564    7492 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-208000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-208000 image ls --format yaml --alsologtostderr:
I0717 10:55:39.605530    7496 out.go:291] Setting OutFile to fd 1 ...
I0717 10:55:39.605688    7496 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:39.605691    7496 out.go:304] Setting ErrFile to fd 2...
I0717 10:55:39.605693    7496 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:39.605810    7496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
I0717 10:55:39.606205    7496 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:55:39.606263    7496 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh pgrep buildkitd: exit status 83 (40.870458ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image build -t localhost/my-image:functional-208000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-208000 image build -t localhost/my-image:functional-208000 testdata/build --alsologtostderr:
I0717 10:55:39.681938    7500 out.go:291] Setting OutFile to fd 1 ...
I0717 10:55:39.682344    7500 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:39.682348    7500 out.go:304] Setting ErrFile to fd 2...
I0717 10:55:39.682350    7500 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:39.682530    7500 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
I0717 10:55:39.682950    7500 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:55:39.683448    7500 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:55:39.683670    7500 build_images.go:133] succeeded building to: 
I0717 10:55:39.683673    7500 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image ls
functional_test.go:442: expected "localhost/my-image:functional-208000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-208000 docker-env) && out/minikube-darwin-arm64 status -p functional-208000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-208000 docker-env) && out/minikube-darwin-arm64 status -p functional-208000": exit status 1 (48.593208ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 update-context --alsologtostderr -v=2: exit status 83 (41.795125ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
** stderr ** 
	I0717 10:55:39.372982    7484 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:39.373945    7484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:39.373949    7484 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:39.373951    7484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:39.374096    7484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:55:39.374322    7484 mustload.go:65] Loading cluster: functional-208000
	I0717 10:55:39.374515    7484 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:39.378247    7484 out.go:177] * The control-plane node functional-208000 host is not running: state=Stopped
	I0717 10:55:39.382222    7484 out.go:177]   To start a cluster, run: "minikube start -p functional-208000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-208000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-208000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 update-context --alsologtostderr -v=2: exit status 83 (41.587917ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
** stderr ** 
	I0717 10:55:39.457485    7488 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:39.457632    7488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:39.457636    7488 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:39.457639    7488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:39.457751    7488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:55:39.457977    7488 mustload.go:65] Loading cluster: functional-208000
	I0717 10:55:39.458152    7488 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:39.462240    7488 out.go:177] * The control-plane node functional-208000 host is not running: state=Stopped
	I0717 10:55:39.466248    7488 out.go:177]   To start a cluster, run: "minikube start -p functional-208000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-208000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-208000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 update-context --alsologtostderr -v=2: exit status 83 (41.642792ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
** stderr ** 
	I0717 10:55:39.415799    7486 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:39.415991    7486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:39.415994    7486 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:39.415996    7486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:39.416120    7486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:55:39.416356    7486 mustload.go:65] Loading cluster: functional-208000
	I0717 10:55:39.416556    7486 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:39.420316    7486 out.go:177] * The control-plane node functional-208000 host is not running: state=Stopped
	I0717 10:55:39.424233    7486 out.go:177]   To start a cluster, run: "minikube start -p functional-208000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-208000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-208000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-208000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-208000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.225709ms)

** stderr ** 
	error: context "functional-208000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-208000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 service list: exit status 83 (45.906292ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-208000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-208000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 service list -o json: exit status 83 (41.875334ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-208000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 service --namespace=default --https --url hello-node: exit status 83 (42.821458ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-208000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 service hello-node --url --format={{.IP}}: exit status 83 (46.817875ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-208000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-208000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 service hello-node --url: exit status 83 (42.811875ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-208000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
functional_test.go:1565: failed to parse "* The control-plane node functional-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-208000\"": parse "* The control-plane node functional-208000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-208000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
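The net/url failure at the end of this test is reproducible in isolation: the harness hands minikube's two-line advice text to url.Parse, and the embedded newline is the invalid control character it rejects. A minimal sketch:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// This is the stdout the test received instead of a service URL.
		advice := "* The control-plane node functional-208000 host is not running: state=Stopped\n" +
			"  To start a cluster, run: \"minikube start -p functional-208000\""
		_, err := url.Parse(advice)
		fmt.Println(err) // net/url: invalid control character in URL
	}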

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-208000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-208000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0717 10:55:00.533967    7285 out.go:291] Setting OutFile to fd 1 ...
I0717 10:55:00.534137    7285 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:00.534141    7285 out.go:304] Setting ErrFile to fd 2...
I0717 10:55:00.534143    7285 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:55:00.534290    7285 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
I0717 10:55:00.534499    7285 mustload.go:65] Loading cluster: functional-208000
I0717 10:55:00.534694    7285 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:55:00.539780    7285 out.go:177] * The control-plane node functional-208000 host is not running: state=Stopped
I0717 10:55:00.547634    7285 out.go:177]   To start a cluster, run: "minikube start -p functional-208000"

stdout: * The control-plane node functional-208000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-208000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-208000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7284: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-208000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-208000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-208000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-208000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-208000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-208000": client config: context "functional-208000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (91.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-208000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-208000 get svc nginx-svc: exit status 1 (66.530708ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-208000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-208000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (91.18s)
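The "no Host in request URL" error above is likewise mechanical: with no tunnel-assigned service IP, the test builds a URL with an empty host, and net/http rejects the request before any network I/O. A minimal sketch:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Empty host: the client fails the request locally, matching the log.
		_, err := http.Get("http://")
		fmt.Println(err) // Get "http:": http: no Host in request URL
	}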

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image load --daemon docker.io/kicbase/echo-server:functional-208000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-208000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image load --daemon docker.io/kicbase/echo-server:functional-208000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-208000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.27s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-208000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image load --daemon docker.io/kicbase/echo-server:functional-208000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-208000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image save docker.io/kicbase/echo-server:functional-208000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-208000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.02595275s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
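The dig probe can be mirrored in Go by pointing a resolver straight at the cluster DNS IP; with nothing routing 10.96.0.10, it times out just as dig reports. A minimal sketch (service name and IP taken from the log above):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Point a resolver straight at the cluster DNS service IP, as dig did.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		// Without a tunnel routing 10.96.0.10, this times out just like dig's
		// "connection timed out; no servers could be reached".
		ips, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
		fmt.Println(ips, err)
	}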

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.63s)

TestMultiControlPlane/serial/StartCluster (10.05s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-488000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-488000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.977028333s)

-- stdout --
	* [ha-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-488000" primary control-plane node in "ha-488000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-488000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
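The ERROR lines in the stdout above point at socket_vmnet rather than at minikube itself. Probing the same unix socket standalone reproduces the refusal; a minimal sketch (socket path taken from the log):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Probe the unix socket the qemu2 driver needs; "connection refused"
		// here means the socket_vmnet daemon is not accepting connections.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println(err) // e.g. dial unix /var/run/socket_vmnet: connect: connection refused
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}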
** stderr ** 
	I0717 10:57:34.862363    7557 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:57:34.862495    7557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:57:34.862499    7557 out.go:304] Setting ErrFile to fd 2...
	I0717 10:57:34.862501    7557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:57:34.862631    7557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:57:34.863648    7557 out.go:298] Setting JSON to false
	I0717 10:57:34.879722    7557 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5226,"bootTime":1721233828,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 10:57:34.879791    7557 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:57:34.885300    7557 out.go:177] * [ha-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:57:34.892448    7557 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 10:57:34.892558    7557 notify.go:220] Checking for updates...
	I0717 10:57:34.898387    7557 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 10:57:34.901456    7557 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:57:34.902884    7557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:57:34.906396    7557 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 10:57:34.909431    7557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:57:34.912642    7557 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:57:34.916378    7557 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 10:57:34.923451    7557 start.go:297] selected driver: qemu2
	I0717 10:57:34.923460    7557 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:57:34.923468    7557 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:57:34.925709    7557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:57:34.928469    7557 out.go:177] * Automatically selected the socket_vmnet network
	I0717 10:57:34.931512    7557 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:57:34.931538    7557 cni.go:84] Creating CNI manager for ""
	I0717 10:57:34.931551    7557 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0717 10:57:34.931566    7557 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 10:57:34.931605    7557 start.go:340] cluster config:
	{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:57:34.935279    7557 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:57:34.943369    7557 out.go:177] * Starting "ha-488000" primary control-plane node in "ha-488000" cluster
	I0717 10:57:34.947398    7557 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:57:34.947419    7557 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:57:34.947427    7557 cache.go:56] Caching tarball of preloaded images
	I0717 10:57:34.947485    7557 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:57:34.947491    7557 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:57:34.947679    7557 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/ha-488000/config.json ...
	I0717 10:57:34.947694    7557 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/ha-488000/config.json: {Name:mke0622fa794bef9b6f8578456c11ce70ff172d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:57:34.948057    7557 start.go:360] acquireMachinesLock for ha-488000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:57:34.948090    7557 start.go:364] duration metric: took 27.084µs to acquireMachinesLock for "ha-488000"
	I0717 10:57:34.948099    7557 start.go:93] Provisioning new machine with config: &{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:57:34.948139    7557 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 10:57:34.952417    7557 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 10:57:34.969030    7557 start.go:159] libmachine.API.Create for "ha-488000" (driver="qemu2")
	I0717 10:57:34.969060    7557 client.go:168] LocalClient.Create starting
	I0717 10:57:34.969124    7557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 10:57:34.969152    7557 main.go:141] libmachine: Decoding PEM data...
	I0717 10:57:34.969160    7557 main.go:141] libmachine: Parsing certificate...
	I0717 10:57:34.969196    7557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 10:57:34.969220    7557 main.go:141] libmachine: Decoding PEM data...
	I0717 10:57:34.969228    7557 main.go:141] libmachine: Parsing certificate...
	I0717 10:57:34.969631    7557 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 10:57:35.107278    7557 main.go:141] libmachine: Creating SSH key...
	I0717 10:57:35.219326    7557 main.go:141] libmachine: Creating Disk image...
	I0717 10:57:35.219330    7557 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 10:57:35.219520    7557 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2
	I0717 10:57:35.228886    7557 main.go:141] libmachine: STDOUT: 
	I0717 10:57:35.228906    7557 main.go:141] libmachine: STDERR: 
	I0717 10:57:35.228959    7557 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2 +20000M
	I0717 10:57:35.236887    7557 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 10:57:35.236901    7557 main.go:141] libmachine: STDERR: 
	I0717 10:57:35.236917    7557 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2
	I0717 10:57:35.236920    7557 main.go:141] libmachine: Starting QEMU VM...
	I0717 10:57:35.236930    7557 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:57:35.236962    7557 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:c8:4f:6f:4e:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2
	I0717 10:57:35.238541    7557 main.go:141] libmachine: STDOUT: 
	I0717 10:57:35.238558    7557 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:57:35.238575    7557 client.go:171] duration metric: took 269.510375ms to LocalClient.Create
	I0717 10:57:37.240749    7557 start.go:128] duration metric: took 2.2925845s to createHost
	I0717 10:57:37.240801    7557 start.go:83] releasing machines lock for "ha-488000", held for 2.29269675s
	W0717 10:57:37.240873    7557 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:57:37.255147    7557 out.go:177] * Deleting "ha-488000" in qemu2 ...
	W0717 10:57:37.281713    7557 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:57:37.281742    7557 start.go:729] Will try again in 5 seconds ...
	I0717 10:57:42.284041    7557 start.go:360] acquireMachinesLock for ha-488000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:57:42.284478    7557 start.go:364] duration metric: took 330.583µs to acquireMachinesLock for "ha-488000"
	I0717 10:57:42.284602    7557 start.go:93] Provisioning new machine with config: &{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:57:42.284889    7557 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 10:57:42.289575    7557 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 10:57:42.338867    7557 start.go:159] libmachine.API.Create for "ha-488000" (driver="qemu2")
	I0717 10:57:42.338909    7557 client.go:168] LocalClient.Create starting
	I0717 10:57:42.339012    7557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 10:57:42.339088    7557 main.go:141] libmachine: Decoding PEM data...
	I0717 10:57:42.339104    7557 main.go:141] libmachine: Parsing certificate...
	I0717 10:57:42.339176    7557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 10:57:42.339224    7557 main.go:141] libmachine: Decoding PEM data...
	I0717 10:57:42.339240    7557 main.go:141] libmachine: Parsing certificate...
	I0717 10:57:42.340293    7557 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 10:57:42.493110    7557 main.go:141] libmachine: Creating SSH key...
	I0717 10:57:42.749983    7557 main.go:141] libmachine: Creating Disk image...
	I0717 10:57:42.749990    7557 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 10:57:42.750215    7557 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2
	I0717 10:57:42.759811    7557 main.go:141] libmachine: STDOUT: 
	I0717 10:57:42.759828    7557 main.go:141] libmachine: STDERR: 
	I0717 10:57:42.759895    7557 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2 +20000M
	I0717 10:57:42.767676    7557 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 10:57:42.767688    7557 main.go:141] libmachine: STDERR: 
	I0717 10:57:42.767699    7557 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2
	I0717 10:57:42.767703    7557 main.go:141] libmachine: Starting QEMU VM...
	I0717 10:57:42.767710    7557 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:57:42.767982    7557 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:0d:43:db:cc:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2
	I0717 10:57:42.769960    7557 main.go:141] libmachine: STDOUT: 
	I0717 10:57:42.769978    7557 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:57:42.769992    7557 client.go:171] duration metric: took 431.077ms to LocalClient.Create
	I0717 10:57:44.772172    7557 start.go:128] duration metric: took 2.487241917s to createHost
	I0717 10:57:44.772222    7557 start.go:83] releasing machines lock for "ha-488000", held for 2.487719s
	W0717 10:57:44.772561    7557 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-488000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-488000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:57:44.781243    7557 out.go:177] 
	W0717 10:57:44.786288    7557 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:57:44.786348    7557 out.go:239] * 
	* 
	W0717 10:57:44.788968    7557 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:57:44.797223    7557 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-488000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (66.609875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.05s)
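
Both provisioning attempts above fail at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and the host stays "Stopped". A minimal Go sketch of that reachability check (illustrative only, not part of the test suite; the socket path is the SocketVMnetPath value from the config logged above):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config logged above.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// With no daemon listening, this fails with an error like the
			// "Connection refused" that socket_vmnet_client reports.
			fmt.Fprintf(os.Stderr, "cannot reach socket_vmnet: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}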

TestMultiControlPlane/serial/DeployApp (109.01s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.092875ms)

** stderr ** 
	error: cluster "ha-488000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- rollout status deployment/busybox: exit status 1 (56.653125ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.582084ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.621708ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.854791ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.225458ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.694292ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.710625ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.983541ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.81025ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.120459ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.008292ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.665416ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.212333ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.725958ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.02775ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.084958ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (29.313917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (109.01s)
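
Each ha_test.go:140 attempt above fails identically because the kubeconfig context points at a cluster that never came up; the test simply polls the same command until its deadline expires, which is why this subtest consumes 109s without doing useful work. A hedged Go sketch of that poll-until-deadline pattern (helper names and intervals are illustrative, not the actual test code):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// podIPs is a stand-in for shelling out to
	// "out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods ...".
	func podIPs(ctx context.Context) (string, error) {
		out, err := exec.CommandContext(ctx, "kubectl", "get", "pods",
			"-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
		return string(out), err
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		for {
			if out, err := podIPs(ctx); err == nil && out != "" {
				fmt.Println("pod IPs:", out)
				return
			}
			select {
			case <-ctx.Done():
				// Mirrors "failed to resolve pod IPs ... exit status 1" above.
				fmt.Println("gave up waiting for pod IPs")
				return
			case <-time.After(10 * time.Second):
				// Retry, as the repeated ha_test.go:140 runs above show.
			}
		}
	}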

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.16475ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (29.619416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-488000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-488000 -v=7 --alsologtostderr: exit status 83 (41.299458ms)

-- stdout --
	* The control-plane node ha-488000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-488000"

-- /stdout --
** stderr ** 
	I0717 10:59:34.005498    7664 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:34.005885    7664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:34.005889    7664 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:34.005891    7664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:34.006066    7664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:59:34.006276    7664 mustload.go:65] Loading cluster: ha-488000
	I0717 10:59:34.006458    7664 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:34.010925    7664 out.go:177] * The control-plane node ha-488000 host is not running: state=Stopped
	I0717 10:59:34.013867    7664 out.go:177]   To start a cluster, run: "minikube start -p ha-488000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-488000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (29.941916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)
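
The distinct exit codes in this run encode the failure stage, which is useful when scanning the remaining subtests. A short Go sketch of the mapping as observed in this report (the descriptions paraphrase the surrounding log output, not minikube's authoritative exit-code table):

	package main

	import "fmt"

	// Exit codes as they appear in this report.
	var seen = map[int]string{
		7:  "status: host is stopped (may be ok)",
		80: "GUEST_PROVISION: creating the VM failed",
		83: "control-plane host not running; advice to run 'minikube start'",
		85: "GUEST_NODE_RETRIEVE: requested node not found in the profile",
	}

	func main() {
		for code, meaning := range seen {
			fmt.Printf("exit status %d: %s\n", code, meaning)
		}
	}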

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-488000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-488000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.563625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-488000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-488000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-488000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (30.494333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
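
The second failure here is a knock-on effect: kubectl wrote the configuration error to stderr and nothing to stdout, and "unexpected end of JSON input" is exactly what encoding/json returns when asked to decode zero bytes. A tiny illustration using only the standard library:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels map[string]string

		// The test unmarshals kubectl's empty stdout.
		err := json.Unmarshal([]byte{}, &labels)
		fmt.Println(err) // unexpected end of JSON input

		// Checking for empty output first gives a clearer failure.
		if raw := []byte{}; len(raw) == 0 {
			fmt.Println("no stdout to decode; see the command's stderr")
		}
	}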

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-488000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-488000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (28.677417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
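
Both assertions parse the profile-list payload shown above: Config.Nodes still holds the single placeholder node from the failed provisioning, and Status is "Stopped" rather than "HAppy". A hedged sketch of that check, with the struct trimmed to the fields the assertions read (not the test's actual types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Trimmed view of "minikube profile list --output json".
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func check(raw []byte, profile string, wantNodes int) error {
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			return err
		}
		for _, p := range pl.Valid {
			if p.Name != profile {
				continue
			}
			if got := len(p.Config.Nodes); got != wantNodes {
				return fmt.Errorf("expected %d nodes but have %d", wantNodes, got)
			}
			if p.Status != "HAppy" {
				return fmt.Errorf("expected %q status but have %q", "HAppy", p.Status)
			}
			return nil
		}
		return fmt.Errorf("profile %q not found", profile)
	}

	func main() {
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-488000","Status":"Stopped","Config":{"Nodes":[{"Name":"","ControlPlane":true}]}}]}`)
		fmt.Println(check(raw, "ha-488000", 4)) // expected 4 nodes but have 1
	}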

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status --output json -v=7 --alsologtostderr: exit status 7 (29.793583ms)

-- stdout --
	{"Name":"ha-488000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0717 10:59:34.210970    7676 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:34.211124    7676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:34.211132    7676 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:34.211134    7676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:34.211260    7676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:59:34.211373    7676 out.go:298] Setting JSON to true
	I0717 10:59:34.211382    7676 mustload.go:65] Loading cluster: ha-488000
	I0717 10:59:34.211452    7676 notify.go:220] Checking for updates...
	I0717 10:59:34.211564    7676 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:34.211575    7676 status.go:255] checking status of ha-488000 ...
	I0717 10:59:34.211797    7676 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 10:59:34.211801    7676 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:34.211803    7676 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-488000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (29.152833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
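
The decode failure at ha_test.go:333 is a shape mismatch: with a single node, "minikube status --output json" emits one JSON object (see the stdout above), while the test unmarshals into a slice. A minimal reproduction plus a decoder tolerant of both shapes (Status here is an illustrative stand-in for the test's cmd.Status type):

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
	)

	// Illustrative stand-in for cmd.Status.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	// decodeStatuses accepts a bare object (single node) or an array (multi-node).
	func decodeStatuses(raw []byte) ([]Status, error) {
		raw = bytes.TrimSpace(raw)
		if len(raw) > 0 && raw[0] == '{' {
			var s Status
			if err := json.Unmarshal(raw, &s); err != nil {
				return nil, err
			}
			return []Status{s}, nil
		}
		var ss []Status
		return ss, json.Unmarshal(raw, &ss)
	}

	func main() {
		out := []byte(`{"Name":"ha-488000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped"}`)

		var ss []Status
		// The failing path from the log:
		// "cannot unmarshal object into Go value of type []...Status".
		fmt.Println(json.Unmarshal(out, &ss))

		// The tolerant decoder handles both shapes.
		got, err := decodeStatuses(out)
		fmt.Println(len(got), err) // 1 <nil>
	}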

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 node stop m02 -v=7 --alsologtostderr: exit status 85 (45.400083ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0717 10:59:34.270055    7680 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:34.270473    7680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:34.270476    7680 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:34.270479    7680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:34.270627    7680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:59:34.270863    7680 mustload.go:65] Loading cluster: ha-488000
	I0717 10:59:34.271067    7680 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:34.275054    7680 out.go:177] 
	W0717 10:59:34.276164    7680 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0717 10:59:34.276169    7680 out.go:239] * 
	* 
	W0717 10:59:34.278127    7680 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:59:34.282959    7680 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-488000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (29.693666ms)

-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:59:34.316027    7682 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:34.316174    7682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:34.316178    7682 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:34.316180    7682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:34.316306    7682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:59:34.316420    7682 out.go:298] Setting JSON to false
	I0717 10:59:34.316431    7682 mustload.go:65] Loading cluster: ha-488000
	I0717 10:59:34.316488    7682 notify.go:220] Checking for updates...
	I0717 10:59:34.316626    7682 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:34.316634    7682 status.go:255] checking status of ha-488000 ...
	I0717 10:59:34.316871    7682 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 10:59:34.316875    7682 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:34.316877    7682 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (30.0385ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
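Before the next subtest, note the shape of this failure: exit status 85 is the GUEST_NODE_RETRIEVE error ("Could not find node m02", shown verbatim in the RestartSecondaryNode output below), so the stop never ran against a second node at all. The profile holds a single stopped control-plane node, which is why all four assertions at ha_test.go:375-384 fail together. A minimal hand check, using only the node list command that already appears later in this report:

	out/minikube-darwin-arm64 node list -p ha-488000
	# if only the primary node is listed, "node stop m02" has nothing to act on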

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-488000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (29.278541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
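The assertion above compares the Status field buried in that escaped JSON blob. Assuming jq is available on the host, the same field can be pulled out directly (the .valid[].Name and .Status paths are taken from the JSON as logged):

	out/minikube-darwin-arm64 profile list --output json \
		| jq -r '.valid[] | select(.Name == "ha-488000") | .Status'
	# prints "Stopped" here, where ha_test.go:413 expects "Degraded"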

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (55.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 node start m02 -v=7 --alsologtostderr: exit status 85 (46.25525ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:59:34.453101    7691 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:34.453494    7691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:34.453498    7691 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:34.453500    7691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:34.453670    7691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:59:34.453913    7691 mustload.go:65] Loading cluster: ha-488000
	I0717 10:59:34.454107    7691 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:34.456934    7691 out.go:177] 
	W0717 10:59:34.461013    7691 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0717 10:59:34.461017    7691 out.go:239] * 
	* 
	W0717 10:59:34.462941    7691 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:59:34.467013    7691 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0717 10:59:34.453101    7691 out.go:291] Setting OutFile to fd 1 ...
I0717 10:59:34.453494    7691 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:59:34.453498    7691 out.go:304] Setting ErrFile to fd 2...
I0717 10:59:34.453500    7691 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:59:34.453670    7691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
I0717 10:59:34.453913    7691 mustload.go:65] Loading cluster: ha-488000
I0717 10:59:34.454107    7691 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:59:34.456934    7691 out.go:177] 
W0717 10:59:34.461013    7691 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0717 10:59:34.461017    7691 out.go:239] * 
* 
W0717 10:59:34.462941    7691 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 10:59:34.467013    7691 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-488000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (29.619292ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:59:34.499932    7693 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:34.500075    7693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:34.500079    7693 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:34.500081    7693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:34.500223    7693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:59:34.500349    7693 out.go:298] Setting JSON to false
	I0717 10:59:34.500358    7693 mustload.go:65] Loading cluster: ha-488000
	I0717 10:59:34.500415    7693 notify.go:220] Checking for updates...
	I0717 10:59:34.500564    7693 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:34.500569    7693 status.go:255] checking status of ha-488000 ...
	I0717 10:59:34.500771    7693 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 10:59:34.500775    7693 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:34.500777    7693 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (77.842292ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:59:35.680001    7695 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:35.680215    7695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:35.680219    7695 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:35.680223    7695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:35.680447    7695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:59:35.680639    7695 out.go:298] Setting JSON to false
	I0717 10:59:35.680654    7695 mustload.go:65] Loading cluster: ha-488000
	I0717 10:59:35.680694    7695 notify.go:220] Checking for updates...
	I0717 10:59:35.680942    7695 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:35.680953    7695 status.go:255] checking status of ha-488000 ...
	I0717 10:59:35.681249    7695 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 10:59:35.681255    7695 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:35.681258    7695 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (73.772625ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:59:37.520877    7697 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:37.521062    7697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:37.521067    7697 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:37.521069    7697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:37.521248    7697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:59:37.521420    7697 out.go:298] Setting JSON to false
	I0717 10:59:37.521433    7697 mustload.go:65] Loading cluster: ha-488000
	I0717 10:59:37.521476    7697 notify.go:220] Checking for updates...
	I0717 10:59:37.521719    7697 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:37.521726    7697 status.go:255] checking status of ha-488000 ...
	I0717 10:59:37.521995    7697 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 10:59:37.522000    7697 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:37.522002    7697 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (74.208625ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:59:39.145503    7699 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:39.145729    7699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:39.145739    7699 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:39.145742    7699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:39.145944    7699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:59:39.146104    7699 out.go:298] Setting JSON to false
	I0717 10:59:39.146118    7699 mustload.go:65] Loading cluster: ha-488000
	I0717 10:59:39.146152    7699 notify.go:220] Checking for updates...
	I0717 10:59:39.146362    7699 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:39.146370    7699 status.go:255] checking status of ha-488000 ...
	I0717 10:59:39.146662    7699 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 10:59:39.146667    7699 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:39.146670    7699 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (72.366667ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:59:43.221663    7704 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:43.221867    7704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:43.221871    7704 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:43.221874    7704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:43.222045    7704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:59:43.222192    7704 out.go:298] Setting JSON to false
	I0717 10:59:43.222206    7704 mustload.go:65] Loading cluster: ha-488000
	I0717 10:59:43.222245    7704 notify.go:220] Checking for updates...
	I0717 10:59:43.222459    7704 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:43.222467    7704 status.go:255] checking status of ha-488000 ...
	I0717 10:59:43.222744    7704 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 10:59:43.222748    7704 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:43.222751    7704 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (72.319584ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:59:48.185508    7706 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:48.185717    7706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:48.185722    7706 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:48.185726    7706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:48.185919    7706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:59:48.186099    7706 out.go:298] Setting JSON to false
	I0717 10:59:48.186119    7706 mustload.go:65] Loading cluster: ha-488000
	I0717 10:59:48.186164    7706 notify.go:220] Checking for updates...
	I0717 10:59:48.186409    7706 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:48.186418    7706 status.go:255] checking status of ha-488000 ...
	I0717 10:59:48.186726    7706 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 10:59:48.186732    7706 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:48.186735    7706 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (73.798833ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:59:58.835343    7709 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:58.835543    7709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:58.835547    7709 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:58.835551    7709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:58.835750    7709 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:59:58.835917    7709 out.go:298] Setting JSON to false
	I0717 10:59:58.835930    7709 mustload.go:65] Loading cluster: ha-488000
	I0717 10:59:58.835967    7709 notify.go:220] Checking for updates...
	I0717 10:59:58.836235    7709 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:58.836245    7709 status.go:255] checking status of ha-488000 ...
	I0717 10:59:58.836533    7709 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 10:59:58.836538    7709 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:58.836541    7709 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (70.412291ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 11:00:15.430211    7731 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:00:15.430397    7731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:15.430402    7731 out.go:304] Setting ErrFile to fd 2...
	I0717 11:00:15.430405    7731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:15.430585    7731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:00:15.430776    7731 out.go:298] Setting JSON to false
	I0717 11:00:15.430790    7731 mustload.go:65] Loading cluster: ha-488000
	I0717 11:00:15.430825    7731 notify.go:220] Checking for updates...
	I0717 11:00:15.431064    7731 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:00:15.431072    7731 status.go:255] checking status of ha-488000 ...
	I0717 11:00:15.431351    7731 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 11:00:15.431356    7731 status.go:343] host is not running, skipping remaining checks
	I0717 11:00:15.431359    7731 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (74.113041ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 11:00:30.349878    7737 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:00:30.350069    7737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:30.350073    7737 out.go:304] Setting ErrFile to fd 2...
	I0717 11:00:30.350076    7737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:30.350249    7737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:00:30.350404    7737 out.go:298] Setting JSON to false
	I0717 11:00:30.350417    7737 mustload.go:65] Loading cluster: ha-488000
	I0717 11:00:30.350462    7737 notify.go:220] Checking for updates...
	I0717 11:00:30.350694    7737 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:00:30.350702    7737 status.go:255] checking status of ha-488000 ...
	I0717 11:00:30.350982    7737 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 11:00:30.350987    7737 status.go:343] host is not running, skipping remaining checks
	I0717 11:00:30.350990    7737 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (32.817ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (55.96s)
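The nine status invocations above are a retry loop: the timestamps run from 10:59:34 to 11:00:30 with growing gaps, which accounts for nearly all of the 55.96s this subtest took. A rough shell equivalent of that polling pattern, with intervals approximated from the log timestamps (a sketch only; the real loop lives in ha_test.go):

	for delay in 1 2 2 4 5 10 17 15; do
		out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr && break
		sleep "$delay"
	done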

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-488000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-488000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (30.61875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)
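Both expectations here fail for the same underlying reason: the profile's Config.Nodes array in the JSON above holds a single control-plane entry. Assuming jq, the node count the test compares against 4 can be read out directly:

	out/minikube-darwin-arm64 profile list --output json \
		| jq '.valid[] | select(.Name == "ha-488000") | .Config.Nodes | length'
	# prints 1 here; ha_test.go:304 expects 4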

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-488000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-488000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-488000 -v=7 --alsologtostderr: (1.975017208s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-488000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-488000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.225238667s)

                                                
                                                
-- stdout --
	* [ha-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-488000" primary control-plane node in "ha-488000" cluster
	* Restarting existing qemu2 VM for "ha-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 11:00:32.529181    7760 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:00:32.529332    7760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:32.529336    7760 out.go:304] Setting ErrFile to fd 2...
	I0717 11:00:32.529345    7760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:32.529510    7760 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:00:32.530734    7760 out.go:298] Setting JSON to false
	I0717 11:00:32.551001    7760 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5404,"bootTime":1721233828,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:00:32.551079    7760 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:00:32.555742    7760 out.go:177] * [ha-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:00:32.563712    7760 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:00:32.563772    7760 notify.go:220] Checking for updates...
	I0717 11:00:32.570606    7760 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:00:32.573705    7760 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:00:32.576710    7760 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:00:32.579641    7760 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:00:32.582657    7760 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:00:32.585949    7760 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:00:32.586004    7760 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:00:32.589641    7760 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:00:32.596670    7760 start.go:297] selected driver: qemu2
	I0717 11:00:32.596677    7760 start.go:901] validating driver "qemu2" against &{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:00:32.596731    7760 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:00:32.599415    7760 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:00:32.599502    7760 cni.go:84] Creating CNI manager for ""
	I0717 11:00:32.599510    7760 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 11:00:32.599561    7760 start.go:340] cluster config:
	{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:00:32.603770    7760 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:00:32.611653    7760 out.go:177] * Starting "ha-488000" primary control-plane node in "ha-488000" cluster
	I0717 11:00:32.614660    7760 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:00:32.614676    7760 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:00:32.614688    7760 cache.go:56] Caching tarball of preloaded images
	I0717 11:00:32.614750    7760 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:00:32.614756    7760 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:00:32.614813    7760 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/ha-488000/config.json ...
	I0717 11:00:32.615241    7760 start.go:360] acquireMachinesLock for ha-488000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:00:32.615276    7760 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "ha-488000"
	I0717 11:00:32.615285    7760 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:00:32.615291    7760 fix.go:54] fixHost starting: 
	I0717 11:00:32.615408    7760 fix.go:112] recreateIfNeeded on ha-488000: state=Stopped err=<nil>
	W0717 11:00:32.615417    7760 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:00:32.623660    7760 out.go:177] * Restarting existing qemu2 VM for "ha-488000" ...
	I0717 11:00:32.627638    7760 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:00:32.627672    7760 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:0d:43:db:cc:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2
	I0717 11:00:32.629692    7760 main.go:141] libmachine: STDOUT: 
	I0717 11:00:32.629712    7760 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:00:32.629739    7760 fix.go:56] duration metric: took 14.446833ms for fixHost
	I0717 11:00:32.629744    7760 start.go:83] releasing machines lock for "ha-488000", held for 14.463791ms
	W0717 11:00:32.629749    7760 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:00:32.629783    7760 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:00:32.629788    7760 start.go:729] Will try again in 5 seconds ...
	I0717 11:00:37.631916    7760 start.go:360] acquireMachinesLock for ha-488000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:00:37.632402    7760 start.go:364] duration metric: took 404.917µs to acquireMachinesLock for "ha-488000"
	I0717 11:00:37.632527    7760 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:00:37.632546    7760 fix.go:54] fixHost starting: 
	I0717 11:00:37.633225    7760 fix.go:112] recreateIfNeeded on ha-488000: state=Stopped err=<nil>
	W0717 11:00:37.633253    7760 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:00:37.637668    7760 out.go:177] * Restarting existing qemu2 VM for "ha-488000" ...
	I0717 11:00:37.646585    7760 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:00:37.646819    7760 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:0d:43:db:cc:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2
	I0717 11:00:37.656005    7760 main.go:141] libmachine: STDOUT: 
	I0717 11:00:37.656067    7760 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:00:37.656145    7760 fix.go:56] duration metric: took 23.600125ms for fixHost
	I0717 11:00:37.656168    7760 start.go:83] releasing machines lock for "ha-488000", held for 23.742416ms
	W0717 11:00:37.656325    7760 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:00:37.663526    7760 out.go:177] 
	W0717 11:00:37.667688    7760 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:00:37.667709    7760 out.go:239] * 
	* 
	W0717 11:00:37.670527    7760 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:00:37.678616    7760 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-488000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-488000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (33.123375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.33s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 node delete m03 -v=7 --alsologtostderr: exit status 83 (44.209084ms)

-- stdout --
	* The control-plane node ha-488000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-488000"

-- /stdout --
** stderr ** 
	I0717 11:00:37.820926    7772 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:00:37.821351    7772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:37.821355    7772 out.go:304] Setting ErrFile to fd 2...
	I0717 11:00:37.821358    7772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:37.821531    7772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:00:37.821762    7772 mustload.go:65] Loading cluster: ha-488000
	I0717 11:00:37.821953    7772 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:00:37.826620    7772 out.go:177] * The control-plane node ha-488000 host is not running: state=Stopped
	I0717 11:00:37.832144    7772 out.go:177]   To start a cluster, run: "minikube start -p ha-488000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-488000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (30.427791ms)

-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:00:37.865279    7774 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:00:37.865433    7774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:37.865437    7774 out.go:304] Setting ErrFile to fd 2...
	I0717 11:00:37.865439    7774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:37.865577    7774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:00:37.865703    7774 out.go:298] Setting JSON to false
	I0717 11:00:37.865713    7774 mustload.go:65] Loading cluster: ha-488000
	I0717 11:00:37.865779    7774 notify.go:220] Checking for updates...
	I0717 11:00:37.865895    7774 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:00:37.865906    7774 status.go:255] checking status of ha-488000 ...
	I0717 11:00:37.866134    7774 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 11:00:37.866138    7774 status.go:343] host is not running, skipping remaining checks
	I0717 11:00:37.866140    7774 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (29.238833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-488000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (29.091041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

TestMultiControlPlane/serial/StopCluster (2.08s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-488000 stop -v=7 --alsologtostderr: (1.983680542s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (65.778375ms)

-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:00:40.019600    7797 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:00:40.019796    7797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:40.019801    7797 out.go:304] Setting ErrFile to fd 2...
	I0717 11:00:40.019804    7797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:40.019985    7797 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:00:40.020139    7797 out.go:298] Setting JSON to false
	I0717 11:00:40.020152    7797 mustload.go:65] Loading cluster: ha-488000
	I0717 11:00:40.020182    7797 notify.go:220] Checking for updates...
	I0717 11:00:40.020426    7797 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:00:40.020434    7797 status.go:255] checking status of ha-488000 ...
	I0717 11:00:40.020719    7797 status.go:330] ha-488000 host status = "Stopped" (err=<nil>)
	I0717 11:00:40.020724    7797 status.go:343] host is not running, skipping remaining checks
	I0717 11:00:40.020727    7797 status.go:257] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (33.0035ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.08s)

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-488000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-488000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.179698458s)

-- stdout --
	* [ha-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-488000" primary control-plane node in "ha-488000" cluster
	* Restarting existing qemu2 VM for "ha-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:00:40.082587    7801 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:00:40.082727    7801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:40.082731    7801 out.go:304] Setting ErrFile to fd 2...
	I0717 11:00:40.082733    7801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:40.082866    7801 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:00:40.083858    7801 out.go:298] Setting JSON to false
	I0717 11:00:40.099959    7801 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5412,"bootTime":1721233828,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:00:40.100030    7801 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:00:40.103834    7801 out.go:177] * [ha-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:00:40.110628    7801 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:00:40.110694    7801 notify.go:220] Checking for updates...
	I0717 11:00:40.117611    7801 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:00:40.120575    7801 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:00:40.123607    7801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:00:40.126605    7801 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:00:40.129556    7801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:00:40.132816    7801 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:00:40.133086    7801 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:00:40.137599    7801 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:00:40.144615    7801 start.go:297] selected driver: qemu2
	I0717 11:00:40.144623    7801 start.go:901] validating driver "qemu2" against &{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:00:40.144704    7801 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:00:40.146870    7801 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:00:40.146896    7801 cni.go:84] Creating CNI manager for ""
	I0717 11:00:40.146901    7801 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 11:00:40.146941    7801 start.go:340] cluster config:
	{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:00:40.150466    7801 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:00:40.156549    7801 out.go:177] * Starting "ha-488000" primary control-plane node in "ha-488000" cluster
	I0717 11:00:40.160620    7801 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:00:40.160635    7801 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:00:40.160645    7801 cache.go:56] Caching tarball of preloaded images
	I0717 11:00:40.160709    7801 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:00:40.160715    7801 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:00:40.160772    7801 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/ha-488000/config.json ...
	I0717 11:00:40.161200    7801 start.go:360] acquireMachinesLock for ha-488000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:00:40.161232    7801 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "ha-488000"
	I0717 11:00:40.161240    7801 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:00:40.161247    7801 fix.go:54] fixHost starting: 
	I0717 11:00:40.161358    7801 fix.go:112] recreateIfNeeded on ha-488000: state=Stopped err=<nil>
	W0717 11:00:40.161366    7801 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:00:40.169600    7801 out.go:177] * Restarting existing qemu2 VM for "ha-488000" ...
	I0717 11:00:40.173587    7801 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:00:40.173626    7801 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:0d:43:db:cc:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2
	I0717 11:00:40.175589    7801 main.go:141] libmachine: STDOUT: 
	I0717 11:00:40.175610    7801 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:00:40.175638    7801 fix.go:56] duration metric: took 14.391458ms for fixHost
	I0717 11:00:40.175642    7801 start.go:83] releasing machines lock for "ha-488000", held for 14.40625ms
	W0717 11:00:40.175648    7801 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:00:40.175691    7801 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:00:40.175695    7801 start.go:729] Will try again in 5 seconds ...
	I0717 11:00:45.177912    7801 start.go:360] acquireMachinesLock for ha-488000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:00:45.178467    7801 start.go:364] duration metric: took 456.625µs to acquireMachinesLock for "ha-488000"
	I0717 11:00:45.178620    7801 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:00:45.178643    7801 fix.go:54] fixHost starting: 
	I0717 11:00:45.179392    7801 fix.go:112] recreateIfNeeded on ha-488000: state=Stopped err=<nil>
	W0717 11:00:45.179419    7801 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:00:45.183863    7801 out.go:177] * Restarting existing qemu2 VM for "ha-488000" ...
	I0717 11:00:45.190802    7801 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:00:45.191021    7801 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:0d:43:db:cc:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/ha-488000/disk.qcow2
	I0717 11:00:45.200793    7801 main.go:141] libmachine: STDOUT: 
	I0717 11:00:45.200853    7801 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:00:45.200933    7801 fix.go:56] duration metric: took 22.2935ms for fixHost
	I0717 11:00:45.200948    7801 start.go:83] releasing machines lock for "ha-488000", held for 22.460333ms
	W0717 11:00:45.201101    7801 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:00:45.207768    7801 out.go:177] 
	W0717 11:00:45.211877    7801 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:00:45.211905    7801 out.go:239] * 
	* 
	W0717 11:00:45.214450    7801 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:00:45.221825    7801 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-488000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (67.92025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-488000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (29.087ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-488000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-488000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.435916ms)

-- stdout --
	* The control-plane node ha-488000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-488000"

-- /stdout --
** stderr ** 
	I0717 11:00:45.412125    7816 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:00:45.412281    7816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:45.412284    7816 out.go:304] Setting ErrFile to fd 2...
	I0717 11:00:45.412286    7816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:45.412424    7816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:00:45.412661    7816 mustload.go:65] Loading cluster: ha-488000
	I0717 11:00:45.412850    7816 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:00:45.417135    7816 out.go:177] * The control-plane node ha-488000 host is not running: state=Stopped
	I0717 11:00:45.421075    7816 out.go:177]   To start a cluster, run: "minikube start -p ha-488000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-488000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (29.208375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-488000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-488000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (29.129333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestImageBuild/serial/Setup (9.82s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-095000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-095000 --driver=qemu2 : exit status 80 (9.751268708s)

-- stdout --
	* [image-095000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-095000" primary control-plane node in "image-095000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-095000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-095000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-095000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-095000 -n image-095000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-095000 -n image-095000: exit status 7 (67.124667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-095000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.82s)

TestJSONOutput/start/Command (9.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-288000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-288000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.785146292s)

-- stdout --
	{"specversion":"1.0","id":"4c53dc02-c349-4fc3-bf50-84377a712d20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-288000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d39fc0f6-269c-4c7e-a6f0-1f2c8056a7d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19282"}}
	{"specversion":"1.0","id":"c7717225-c411-4ff1-b8ec-72ac3e667162","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig"}}
	{"specversion":"1.0","id":"0f2647eb-78c7-46df-b429-e9fc724f064d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e2d60dd2-609e-4674-bd10-605f2f05d755","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d5cee766-7933-4f20-9f5e-25c4204661be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube"}}
	{"specversion":"1.0","id":"4a4b0474-ecff-4f4d-82e1-bc77a7a3e1a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e0701178-bcc6-43de-a119-746b00476f96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a5e76ac-e550-48f5-8488-7d41eec013d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"f601c7cd-968d-46ea-b665-60b804b3e9d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-288000\" primary control-plane node in \"json-output-288000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"18891323-f2f8-4a39-b264-dbfa0a3824e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"e157a42e-8679-4839-8d45-9c6147ad3327","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-288000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"15849c9d-d899-4015-ac83-f62308fddd01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"730672c3-da63-4322-9bb7-e6b7580e6699","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"bafa5f95-1ed6-4cca-9788-a94853c0256f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-288000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"aaa43086-0e50-4b39-ab0c-eb5f3aa0b8c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"40deb46c-47b5-44e7-ac7f-e84da514590e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-288000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.79s)
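
The failure above has two layers: the start itself exits 80 because of the socket_vmnet refusal, and the test's CloudEvents validation then chokes on the bare "OUTPUT:" line that socket_vmnet_client injects into stdout, producing the "invalid character 'O' looking for beginning of value" error. A minimal Go sketch of that line-by-line validation (an illustration, not the test suite's actual code):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the events above; it is a
// trimmed-down illustration, not minikube's own schema type.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long
	for n := 1; sc.Scan(); n++ {
		line := sc.Bytes()
		if len(line) == 0 {
			continue
		}
		var ev cloudEvent
		if err := json.Unmarshal(line, &ev); err != nil {
			// A bare "OUTPUT:" or "ERROR:" line lands here, which is
			// exactly the "invalid character 'O'" failure above.
			fmt.Printf("line %d is not a CloudEvent: %v\n", n, err)
			return
		}
		fmt.Printf("line %d: %s: %s\n", n, ev.Type, ev.Data["message"])
	}
}
```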

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-288000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-288000 --output=json --user=testUser: exit status 83 (80.123291ms)

-- stdout --
	{"specversion":"1.0","id":"856d5849-7a2b-4494-a725-ea6075159eb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-288000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"b8569304-81c2-4b5c-b59e-8494e9d5a828","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-288000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-288000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-288000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-288000 --output=json --user=testUser: exit status 83 (43.237333ms)

-- stdout --
	* The control-plane node json-output-288000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-288000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-288000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-288000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.07s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-450000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-450000 --driver=qemu2 : exit status 80 (9.77152575s)

-- stdout --
	* [first-450000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-450000" primary control-plane node in "first-450000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-450000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-450000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-450000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-17 11:01:19.082642 -0700 PDT m=+461.728196542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-451000 -n second-451000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-451000 -n second-451000: exit status 85 (82.654458ms)

-- stdout --
	* Profile "second-451000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-451000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-451000" host is not running, skipping log retrieval (state="* Profile \"second-451000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-451000\"")
helpers_test.go:175: Cleaning up "second-451000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-451000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-17 11:01:19.273426 -0700 PDT m=+461.918979709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-450000 -n first-450000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-450000 -n first-450000: exit status 7 (30.004542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-450000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-450000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-450000
--- FAIL: TestMinikubeProfile (10.07s)
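
Every qemu2 start in this report fails at the same dial: /var/run/socket_vmnet refuses the connection, minikube deletes the half-built profile, retries once, and exits 80. A quick preflight probe of that socket (an illustrative sketch, not part of the test suite) separates "socket file missing" from "file present but no daemon listening"; the latter is what "Connection refused" means here:

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failures above
	if _, err := os.Stat(sock); err != nil {
		fmt.Printf("socket path missing: %v (is socket_vmnet installed?)\n", err)
		return
	}
	// "Connection refused" means the file exists but nothing is accepting on it.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("dial failed: %v (is the socket_vmnet daemon running?)\n", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```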

TestMountStart/serial/StartWithMountFirst (9.92s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-617000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-617000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.847356667s)

-- stdout --
	* [mount-start-1-617000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-617000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-617000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-617000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-617000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-617000 -n mount-start-1-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-617000 -n mount-start-1-617000: exit status 7 (71.19025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-617000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.92s)

TestMultiNode/serial/FreshStart2Nodes (9.91s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-931000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-931000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.838614959s)

-- stdout --
	* [multinode-931000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-931000" primary control-plane node in "multinode-931000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-931000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:01:29.501795    7968 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:01:29.502006    7968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:01:29.502009    7968 out.go:304] Setting ErrFile to fd 2...
	I0717 11:01:29.502011    7968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:01:29.502140    7968 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:01:29.503218    7968 out.go:298] Setting JSON to false
	I0717 11:01:29.519169    7968 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5461,"bootTime":1721233828,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:01:29.519277    7968 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:01:29.525058    7968 out.go:177] * [multinode-931000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:01:29.532009    7968 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:01:29.532087    7968 notify.go:220] Checking for updates...
	I0717 11:01:29.538983    7968 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:01:29.542079    7968 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:01:29.545016    7968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:01:29.548052    7968 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:01:29.550970    7968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:01:29.554144    7968 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:01:29.558050    7968 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:01:29.564927    7968 start.go:297] selected driver: qemu2
	I0717 11:01:29.564933    7968 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:01:29.564939    7968 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:01:29.567185    7968 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:01:29.570058    7968 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:01:29.573102    7968 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:01:29.573116    7968 cni.go:84] Creating CNI manager for ""
	I0717 11:01:29.573125    7968 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0717 11:01:29.573138    7968 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 11:01:29.573176    7968 start.go:340] cluster config:
	{Name:multinode-931000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-931000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:01:29.576936    7968 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:01:29.585021    7968 out.go:177] * Starting "multinode-931000" primary control-plane node in "multinode-931000" cluster
	I0717 11:01:29.589035    7968 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:01:29.589056    7968 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:01:29.589070    7968 cache.go:56] Caching tarball of preloaded images
	I0717 11:01:29.589137    7968 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:01:29.589143    7968 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:01:29.589374    7968 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/multinode-931000/config.json ...
	I0717 11:01:29.589386    7968 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/multinode-931000/config.json: {Name:mk1353e9fd38b7d3b7f4e261098b475b9d287f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:01:29.589600    7968 start.go:360] acquireMachinesLock for multinode-931000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:01:29.589636    7968 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "multinode-931000"
	I0717 11:01:29.589647    7968 start.go:93] Provisioning new machine with config: &{Name:multinode-931000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-931000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:01:29.589684    7968 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:01:29.597975    7968 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:01:29.616034    7968 start.go:159] libmachine.API.Create for "multinode-931000" (driver="qemu2")
	I0717 11:01:29.616067    7968 client.go:168] LocalClient.Create starting
	I0717 11:01:29.616129    7968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:01:29.616162    7968 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:29.616174    7968 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:29.616219    7968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:01:29.616243    7968 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:29.616251    7968 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:29.616676    7968 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:01:29.755452    7968 main.go:141] libmachine: Creating SSH key...
	I0717 11:01:29.902870    7968 main.go:141] libmachine: Creating Disk image...
	I0717 11:01:29.902876    7968 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:01:29.903096    7968 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2
	I0717 11:01:29.912811    7968 main.go:141] libmachine: STDOUT: 
	I0717 11:01:29.912831    7968 main.go:141] libmachine: STDERR: 
	I0717 11:01:29.912879    7968 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2 +20000M
	I0717 11:01:29.920689    7968 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:01:29.920708    7968 main.go:141] libmachine: STDERR: 
	I0717 11:01:29.920724    7968 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2
	I0717 11:01:29.920728    7968 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:01:29.920739    7968 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:01:29.920762    7968 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:6b:87:07:f1:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2
	I0717 11:01:29.922381    7968 main.go:141] libmachine: STDOUT: 
	I0717 11:01:29.922396    7968 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:01:29.922420    7968 client.go:171] duration metric: took 306.349333ms to LocalClient.Create
	I0717 11:01:31.924600    7968 start.go:128] duration metric: took 2.334893916s to createHost
	I0717 11:01:31.924660    7968 start.go:83] releasing machines lock for "multinode-931000", held for 2.335011291s
	W0717 11:01:31.924708    7968 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:31.934575    7968 out.go:177] * Deleting "multinode-931000" in qemu2 ...
	W0717 11:01:31.960546    7968 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:31.960578    7968 start.go:729] Will try again in 5 seconds ...
	I0717 11:01:36.962812    7968 start.go:360] acquireMachinesLock for multinode-931000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:01:36.963288    7968 start.go:364] duration metric: took 348.875µs to acquireMachinesLock for "multinode-931000"
	I0717 11:01:36.963397    7968 start.go:93] Provisioning new machine with config: &{Name:multinode-931000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-931000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:01:36.963676    7968 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:01:36.976393    7968 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:01:37.026910    7968 start.go:159] libmachine.API.Create for "multinode-931000" (driver="qemu2")
	I0717 11:01:37.026947    7968 client.go:168] LocalClient.Create starting
	I0717 11:01:37.027067    7968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:01:37.027134    7968 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:37.027152    7968 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:37.027225    7968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:01:37.027271    7968 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:37.027283    7968 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:37.027781    7968 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:01:37.175288    7968 main.go:141] libmachine: Creating SSH key...
	I0717 11:01:37.253206    7968 main.go:141] libmachine: Creating Disk image...
	I0717 11:01:37.253211    7968 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:01:37.253392    7968 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2
	I0717 11:01:37.262704    7968 main.go:141] libmachine: STDOUT: 
	I0717 11:01:37.262724    7968 main.go:141] libmachine: STDERR: 
	I0717 11:01:37.262777    7968 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2 +20000M
	I0717 11:01:37.270498    7968 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:01:37.270510    7968 main.go:141] libmachine: STDERR: 
	I0717 11:01:37.270520    7968 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2
	I0717 11:01:37.270525    7968 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:01:37.270534    7968 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:01:37.270576    7968 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:72:e4:14:c1:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2
	I0717 11:01:37.272150    7968 main.go:141] libmachine: STDOUT: 
	I0717 11:01:37.272166    7968 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:01:37.272178    7968 client.go:171] duration metric: took 245.226417ms to LocalClient.Create
	I0717 11:01:39.274351    7968 start.go:128] duration metric: took 2.310647542s to createHost
	I0717 11:01:39.274465    7968 start.go:83] releasing machines lock for "multinode-931000", held for 2.311103042s
	W0717 11:01:39.274820    7968 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-931000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-931000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:39.285483    7968 out.go:177] 
	W0717 11:01:39.288461    7968 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:01:39.288494    7968 out.go:239] * 
	* 
	W0717 11:01:39.291127    7968 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:01:39.298372    7968 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-931000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (67.736625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.91s)
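
The verbose trace above exposes the recovery shape: libmachine's create path fails at the socket dial, the half-built profile is deleted, and after "Will try again in 5 seconds" exactly one more attempt runs before the GUEST_PROVISION exit. Schematically, under the assumption that one retry is all that happens (a simplified sketch, not minikube's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for libmachine's create path; in the trace above it
// fails when socket_vmnet_client cannot dial the vmnet socket.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := createHost()
	if err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		err = createHost()          // one retry, then give up
	}
	if err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		os.Exit(80) // the "exit status 80" seen throughout this report
	}
	fmt.Println("host created")
}
```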

TestMultiNode/serial/DeployApp2Nodes (80.4s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (58.768375ms)

** stderr ** 
	error: cluster "multinode-931000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- rollout status deployment/busybox: exit status 1 (56.7185ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.691708ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.321167ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.176917ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.596708ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.724292ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.269333ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.47775ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.683375ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.428584ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.186ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.998125ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.93325ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.990792ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.600542ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (29.614083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (80.40s)
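
With no API server behind the kubeconfig entry, every kubectl call above fails immediately; the test still burns 80 seconds because it retries the pod-IP query ten times, logging "may be temporary" each round. The polling pattern looks roughly like this (an illustrative helper, not the suite's code; the binary path and profile name are taken from the log above):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs shells out the same way the test does: kubectl routed through the
// minikube binary under test, against the named profile.
func podIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		ips, err := podIPs("multinode-931000")
		if err == nil && ips != "" {
			fmt.Println("pod IPs:", ips)
			return
		}
		// Mirrors "failed to retrieve Pod IPs (may be temporary)" above.
		fmt.Printf("attempt %d: no pod IPs yet: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("failed to resolve pod IPs after 10 attempts")
}
```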

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-931000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.63775ms)

** stderr ** 
	error: no server found for cluster "multinode-931000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (29.152834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-931000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-931000 -v 3 --alsologtostderr: exit status 83 (39.627042ms)

-- stdout --
	* The control-plane node multinode-931000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-931000"

-- /stdout --
** stderr ** 
	I0717 11:02:59.897356    8056 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:02:59.897527    8056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:02:59.897530    8056 out.go:304] Setting ErrFile to fd 2...
	I0717 11:02:59.897536    8056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:02:59.897652    8056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:02:59.897892    8056 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:02:59.898082    8056 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:02:59.901476    8056 out.go:177] * The control-plane node multinode-931000 host is not running: state=Stopped
	I0717 11:02:59.905396    8056 out.go:177]   To start a cluster, run: "minikube start -p multinode-931000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-931000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (30.056958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
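Exit status 83 here accompanies minikube's "host is not running" advice rather than a crash; that mapping is inferred from this log, not from minikube's documented exit codes. A hedged sketch of separating such an advisory exit from a successful `node add` when driving the CLI from Go:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "node", "add",
		"-p", "multinode-931000", "-v", "3", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In this run the code was 83, paired with the "host is not
		// running" advice in stdout; any non-zero code means no node
		// was actually added.
		fmt.Printf("node add exited with code %d:\n%s", exitErr.ExitCode(), out)
		return
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Println("node added")
}
```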

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-931000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-931000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.131ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-931000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-931000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-931000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (29.631792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
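Two errors stack up here: kubectl printed nothing (no context), and the harness then failed to parse that empty string as JSON. Note that the jsonpath template above leaves a trailing comma (`[{...},{...},]`) even on success, so some normalization has to happen before decoding. A sketch that reproduces both messages; the comma-trimming step is an assumption about the harness, not a quote from multinode_test.go:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// decodeLabels turns the jsonpath output `[{...},{...},]` into a slice of
// label maps. The {end} in the template leaves a trailing comma, so it is
// trimmed before unmarshalling.
func decodeLabels(out string) ([]map[string]string, error) {
	out = strings.TrimSpace(out)
	if strings.HasSuffix(out, ",]") {
		out = strings.TrimSuffix(out, ",]") + "]"
	}
	var labels []map[string]string
	if err := json.Unmarshal([]byte(out), &labels); err != nil {
		return nil, err
	}
	return labels, nil
}

func main() {
	// With no reachable cluster, kubectl printed nothing:
	_, err := decodeLabels("")
	fmt.Println(err) // unexpected end of JSON input

	// What a healthy two-node cluster might produce (hypothetical values):
	good := `[{"kubernetes.io/hostname":"multinode-931000"},{"kubernetes.io/hostname":"multinode-931000-m02"},]`
	labels, _ := decodeLabels(good)
	fmt.Println(len(labels), "nodes labelled")
}
```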

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-931000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-931000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-931000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"multinode-931000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (29.646583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
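The assertion decodes the quoted `profile list --output json` payload and counts `Config.Nodes`; since the VM never started its workers, only the single control-plane entry is present. A minimal sketch of that node-count check, with the payload abbreviated to the fields involved:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors just the fields of the `profile list --output json` payload that
// the node-count check needs; encoding/json ignores all other fields.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	}
}

func main() {
	// Abbreviated form of the payload quoted in the failure: one node
	// where three were expected.
	raw := `{"invalid":[],"valid":[{"Name":"multinode-931000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`
	var pl profileList
	if err := json.Unmarshal([]byte(raw), &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if got, want := len(p.Config.Nodes), 3; got != want {
			fmt.Printf("profile %q has %d nodes, want %d\n", p.Name, got, want)
		}
	}
}
```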

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status --output json --alsologtostderr: exit status 7 (30.157416ms)

-- stdout --
	{"Name":"multinode-931000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0717 11:03:00.099971    8068 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:00.100138    8068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:00.100141    8068 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:00.100144    8068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:00.100264    8068 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:00.100388    8068 out.go:298] Setting JSON to true
	I0717 11:03:00.100401    8068 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:03:00.100453    8068 notify.go:220] Checking for updates...
	I0717 11:03:00.100618    8068 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:00.100625    8068 status.go:255] checking status of multinode-931000 ...
	I0717 11:03:00.100825    8068 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:03:00.100828    8068 status.go:343] host is not running, skipping remaining checks
	I0717 11:03:00.100830    8068 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-931000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (28.919792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
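This unmarshal error is reproducible in isolation: with one node, `status --output json` prints a bare JSON object, while the test decodes into a slice (`[]cmd.Status`). A small sketch with a local stand-in for `cmd.Status`, plus the fallback a tolerant caller could use:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Stand-in for minikube's cmd.Status with the fields shown in the log.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	single := `{"Name":"multinode-931000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`

	// Decoding a bare object into a slice fails exactly as in the test:
	var many []Status
	err := json.Unmarshal([]byte(single), &many)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status

	// A tolerant caller could fall back to a single object and wrap it:
	var one Status
	if err := json.Unmarshal([]byte(single), &one); err == nil {
		many = []Status{one}
	}
	fmt.Println(len(many), "status entry")
}
```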

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 node stop m03: exit status 85 (50.125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-931000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status: exit status 7 (29.42675ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status --alsologtostderr: exit status 7 (30.091042ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:03:00.239345    8076 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:00.239483    8076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:00.239486    8076 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:00.239488    8076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:00.239623    8076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:00.239753    8076 out.go:298] Setting JSON to false
	I0717 11:03:00.239765    8076 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:03:00.239812    8076 notify.go:220] Checking for updates...
	I0717 11:03:00.239982    8076 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:00.239987    8076 status.go:255] checking status of multinode-931000 ...
	I0717 11:03:00.240188    8076 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:03:00.240192    8076 status.go:343] host is not running, skipping remaining checks
	I0717 11:03:00.240197    8076 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-931000 status --alsologtostderr": multinode-931000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (29.898583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
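"Incorrect number of running kubelets" suggests the check counts `kubelet: Running` lines in the combined status text and expects two after stopping only the third node; here everything reads Stopped. A sketch of that count; the expected value and the counting logic are assumptions drawn from the message, not from the test source:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// The status text from the failure above: no node is running at all.
	status := `multinode-931000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	running := strings.Count(status, "kubelet: Running")
	if running != 2 {
		fmt.Printf("incorrect number of running kubelets: got %d, want 2\n", running)
	}
}
```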

TestMultiNode/serial/StartAfterStop (53.41s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.046ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0717 11:03:00.298970    8080 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:00.299366    8080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:00.299371    8080 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:00.299374    8080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:00.299544    8080 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:00.299767    8080 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:03:00.299943    8080 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:00.304474    8080 out.go:177] 
	W0717 11:03:00.307456    8080 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0717 11:03:00.307460    8080 out.go:239] * 
	* 
	W0717 11:03:00.309444    8080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:03:00.313439    8080 out.go:177] 

** /stderr **
multinode_test.go:284: I0717 11:03:00.298970    8080 out.go:291] Setting OutFile to fd 1 ...
I0717 11:03:00.299366    8080 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 11:03:00.299371    8080 out.go:304] Setting ErrFile to fd 2...
I0717 11:03:00.299374    8080 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 11:03:00.299544    8080 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
I0717 11:03:00.299767    8080 mustload.go:65] Loading cluster: multinode-931000
I0717 11:03:00.299943    8080 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 11:03:00.304474    8080 out.go:177] 
W0717 11:03:00.307456    8080 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0717 11:03:00.307460    8080 out.go:239] * 
* 
W0717 11:03:00.309444    8080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 11:03:00.313439    8080 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-931000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr: exit status 7 (30.437ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:03:00.347260    8082 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:00.347408    8082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:00.347411    8082 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:00.347414    8082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:00.347537    8082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:00.347666    8082 out.go:298] Setting JSON to false
	I0717 11:03:00.347676    8082 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:03:00.347741    8082 notify.go:220] Checking for updates...
	I0717 11:03:00.347900    8082 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:00.347906    8082 status.go:255] checking status of multinode-931000 ...
	I0717 11:03:00.348101    8082 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:03:00.348105    8082 status.go:343] host is not running, skipping remaining checks
	I0717 11:03:00.348108    8082 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr: exit status 7 (69.502208ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:03:01.642663    8084 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:01.642866    8084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:01.642871    8084 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:01.642875    8084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:01.643048    8084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:01.643223    8084 out.go:298] Setting JSON to false
	I0717 11:03:01.643247    8084 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:03:01.643282    8084 notify.go:220] Checking for updates...
	I0717 11:03:01.643550    8084 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:01.643559    8084 status.go:255] checking status of multinode-931000 ...
	I0717 11:03:01.643864    8084 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:03:01.643869    8084 status.go:343] host is not running, skipping remaining checks
	I0717 11:03:01.643872    8084 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr: exit status 7 (72.217ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:03:03.139921    8086 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:03.140144    8086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:03.140149    8086 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:03.140152    8086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:03.140338    8086 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:03.140505    8086 out.go:298] Setting JSON to false
	I0717 11:03:03.140517    8086 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:03:03.140561    8086 notify.go:220] Checking for updates...
	I0717 11:03:03.140776    8086 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:03.140785    8086 status.go:255] checking status of multinode-931000 ...
	I0717 11:03:03.141064    8086 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:03:03.141068    8086 status.go:343] host is not running, skipping remaining checks
	I0717 11:03:03.141071    8086 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr: exit status 7 (72.890083ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:03:05.419647    8090 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:05.419867    8090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:05.419871    8090 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:05.419874    8090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:05.420065    8090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:05.420224    8090 out.go:298] Setting JSON to false
	I0717 11:03:05.420241    8090 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:03:05.420278    8090 notify.go:220] Checking for updates...
	I0717 11:03:05.420513    8090 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:05.420521    8090 status.go:255] checking status of multinode-931000 ...
	I0717 11:03:05.420794    8090 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:03:05.420799    8090 status.go:343] host is not running, skipping remaining checks
	I0717 11:03:05.420802    8090 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr: exit status 7 (72.504625ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:03:08.083942    8092 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:08.084122    8092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:08.084126    8092 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:08.084130    8092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:08.084312    8092 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:08.084468    8092 out.go:298] Setting JSON to false
	I0717 11:03:08.084481    8092 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:03:08.084520    8092 notify.go:220] Checking for updates...
	I0717 11:03:08.084717    8092 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:08.084724    8092 status.go:255] checking status of multinode-931000 ...
	I0717 11:03:08.085012    8092 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:03:08.085017    8092 status.go:343] host is not running, skipping remaining checks
	I0717 11:03:08.085020    8092 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr: exit status 7 (71.767667ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:03:12.696902    8096 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:12.697084    8096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:12.697088    8096 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:12.697092    8096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:12.697254    8096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:12.697407    8096 out.go:298] Setting JSON to false
	I0717 11:03:12.697420    8096 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:03:12.697448    8096 notify.go:220] Checking for updates...
	I0717 11:03:12.697684    8096 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:12.697691    8096 status.go:255] checking status of multinode-931000 ...
	I0717 11:03:12.697981    8096 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:03:12.697986    8096 status.go:343] host is not running, skipping remaining checks
	I0717 11:03:12.697989    8096 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr: exit status 7 (71.324875ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:03:17.830990    8098 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:17.831210    8098 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:17.831214    8098 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:17.831218    8098 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:17.831394    8098 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:17.831547    8098 out.go:298] Setting JSON to false
	I0717 11:03:17.831560    8098 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:03:17.831593    8098 notify.go:220] Checking for updates...
	I0717 11:03:17.831822    8098 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:17.831830    8098 status.go:255] checking status of multinode-931000 ...
	I0717 11:03:17.832141    8098 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:03:17.832146    8098 status.go:343] host is not running, skipping remaining checks
	I0717 11:03:17.832149    8098 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr: exit status 7 (73.230417ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:03:32.941376    8103 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:32.941604    8103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:32.941608    8103 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:32.941612    8103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:32.941802    8103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:32.941977    8103 out.go:298] Setting JSON to false
	I0717 11:03:32.941991    8103 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:03:32.942037    8103 notify.go:220] Checking for updates...
	I0717 11:03:32.942245    8103 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:32.942253    8103 status.go:255] checking status of multinode-931000 ...
	I0717 11:03:32.942549    8103 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:03:32.942554    8103 status.go:343] host is not running, skipping remaining checks
	I0717 11:03:32.942557    8103 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr: exit status 7 (74.816916ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:03:53.644592    8106 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:53.644822    8106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:53.644827    8106 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:53.644829    8106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:53.645020    8106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:53.645206    8106 out.go:298] Setting JSON to false
	I0717 11:03:53.645219    8106 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:03:53.645266    8106 notify.go:220] Checking for updates...
	I0717 11:03:53.645495    8106 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:53.645505    8106 status.go:255] checking status of multinode-931000 ...
	I0717 11:03:53.645793    8106 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:03:53.645798    8106 status.go:343] host is not running, skipping remaining checks
	I0717 11:03:53.645801    8106 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-931000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (32.9035ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (53.41s)
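The timestamps of the repeated status runs above (11:03:00, :01, :03, :05, :08, :12, :17, :32, :53) show the harness polling on a growing backoff until its window expires. A rough sketch of such a poll loop; the intervals, deadline, and the `host: Running` success condition are illustrative, not the harness's actual schedule:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(50 * time.Second)
	wait := time.Second
	for time.Now().Before(deadline) {
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p",
			"multinode-931000", "status").CombinedOutput()
		if strings.Contains(string(out), "host: Running") {
			fmt.Println("host is back")
			return
		}
		time.Sleep(wait)
		if wait < 15*time.Second {
			wait += wait / 2 // rough exponential backoff, as the timestamps suggest
		}
	}
	fmt.Println("host never came back: still Stopped")
}
```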

TestMultiNode/serial/RestartKeepsNodes (7.11s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-931000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-931000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-931000: (1.763266833s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-931000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-931000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.211058125s)

-- stdout --
	* [multinode-931000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-931000" primary control-plane node in "multinode-931000" cluster
	* Restarting existing qemu2 VM for "multinode-931000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-931000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:03:55.533112    8122 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:03:55.533301    8122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:55.533306    8122 out.go:304] Setting ErrFile to fd 2...
	I0717 11:03:55.533309    8122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:03:55.533471    8122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:03:55.534633    8122 out.go:298] Setting JSON to false
	I0717 11:03:55.553568    8122 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5607,"bootTime":1721233828,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:03:55.553646    8122 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:03:55.558700    8122 out.go:177] * [multinode-931000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:03:55.565693    8122 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:03:55.565738    8122 notify.go:220] Checking for updates...
	I0717 11:03:55.570878    8122 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:03:55.573576    8122 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:03:55.576596    8122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:03:55.577759    8122 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:03:55.580562    8122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:03:55.583929    8122 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:03:55.583991    8122 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:03:55.588398    8122 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:03:55.595578    8122 start.go:297] selected driver: qemu2
	I0717 11:03:55.595585    8122 start.go:901] validating driver "qemu2" against &{Name:multinode-931000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-931000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:03:55.595644    8122 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:03:55.597948    8122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:03:55.597998    8122 cni.go:84] Creating CNI manager for ""
	I0717 11:03:55.598003    8122 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 11:03:55.598045    8122 start.go:340] cluster config:
	{Name:multinode-931000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-931000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:03:55.601614    8122 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:03:55.609588    8122 out.go:177] * Starting "multinode-931000" primary control-plane node in "multinode-931000" cluster
	I0717 11:03:55.613604    8122 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:03:55.613625    8122 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:03:55.613637    8122 cache.go:56] Caching tarball of preloaded images
	I0717 11:03:55.613724    8122 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:03:55.613730    8122 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:03:55.613786    8122 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/multinode-931000/config.json ...
	I0717 11:03:55.614241    8122 start.go:360] acquireMachinesLock for multinode-931000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:03:55.614275    8122 start.go:364] duration metric: took 28.416µs to acquireMachinesLock for "multinode-931000"
	I0717 11:03:55.614284    8122 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:03:55.614290    8122 fix.go:54] fixHost starting: 
	I0717 11:03:55.614404    8122 fix.go:112] recreateIfNeeded on multinode-931000: state=Stopped err=<nil>
	W0717 11:03:55.614413    8122 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:03:55.622606    8122 out.go:177] * Restarting existing qemu2 VM for "multinode-931000" ...
	I0717 11:03:55.625556    8122 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:03:55.625601    8122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:72:e4:14:c1:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2
	I0717 11:03:55.627710    8122 main.go:141] libmachine: STDOUT: 
	I0717 11:03:55.627730    8122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:03:55.627761    8122 fix.go:56] duration metric: took 13.470916ms for fixHost
	I0717 11:03:55.627770    8122 start.go:83] releasing machines lock for "multinode-931000", held for 13.485667ms
	W0717 11:03:55.627778    8122 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:03:55.627811    8122 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:03:55.627816    8122 start.go:729] Will try again in 5 seconds ...
	I0717 11:04:00.630030    8122 start.go:360] acquireMachinesLock for multinode-931000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:04:00.630542    8122 start.go:364] duration metric: took 393.167µs to acquireMachinesLock for "multinode-931000"
	I0717 11:04:00.630689    8122 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:04:00.630712    8122 fix.go:54] fixHost starting: 
	I0717 11:04:00.631481    8122 fix.go:112] recreateIfNeeded on multinode-931000: state=Stopped err=<nil>
	W0717 11:04:00.631509    8122 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:04:00.636042    8122 out.go:177] * Restarting existing qemu2 VM for "multinode-931000" ...
	I0717 11:04:00.640032    8122 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:04:00.640322    8122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:72:e4:14:c1:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2
	I0717 11:04:00.650079    8122 main.go:141] libmachine: STDOUT: 
	I0717 11:04:00.650154    8122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:04:00.650243    8122 fix.go:56] duration metric: took 19.534792ms for fixHost
	I0717 11:04:00.650263    8122 start.go:83] releasing machines lock for "multinode-931000", held for 19.694333ms
	W0717 11:04:00.650482    8122 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-931000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-931000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:04:00.658041    8122 out.go:177] 
	W0717 11:04:00.662010    8122 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:04:00.662029    8122 out.go:239] * 
	* 
	W0717 11:04:00.663906    8122 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:04:00.672049    8122 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-931000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-931000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (32.93525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.11s)
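
Every VM-start failure in this group reduces to the same root cause, visible in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never launched. Below is a minimal, self-contained Go sketch of the connection that is effectively being refused; this is illustrative only, not minikube's code, with the socket path taken from the log:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path reported throughout this log; socket_vmnet must be
		// running and accepting connections for qemu2 networking to work.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the condition behind every `Failed to connect to
			// "/var/run/socket_vmnet": Connection refused` line above.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}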

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 node delete m03: exit status 83 (41.408333ms)

-- stdout --
	* The control-plane node multinode-931000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-931000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-931000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status --alsologtostderr: exit status 7 (30.126041ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:04:00.856998    8136 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:04:00.857159    8136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:04:00.857163    8136 out.go:304] Setting ErrFile to fd 2...
	I0717 11:04:00.857165    8136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:04:00.857318    8136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:04:00.857433    8136 out.go:298] Setting JSON to false
	I0717 11:04:00.857451    8136 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:04:00.857492    8136 notify.go:220] Checking for updates...
	I0717 11:04:00.857632    8136 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:04:00.857639    8136 status.go:255] checking status of multinode-931000 ...
	I0717 11:04:00.857857    8136 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:04:00.857860    8136 status.go:343] host is not running, skipping remaining checks
	I0717 11:04:00.857862    8136 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-931000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (29.506125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (1.89s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-931000 stop: (1.760893167s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status: exit status 7 (65.492625ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-931000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-931000 status --alsologtostderr: exit status 7 (32.260584ms)

-- stdout --
	multinode-931000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 11:04:02.745805    8152 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:04:02.745943    8152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:04:02.745947    8152 out.go:304] Setting ErrFile to fd 2...
	I0717 11:04:02.745949    8152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:04:02.746087    8152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:04:02.746214    8152 out.go:298] Setting JSON to false
	I0717 11:04:02.746224    8152 mustload.go:65] Loading cluster: multinode-931000
	I0717 11:04:02.746288    8152 notify.go:220] Checking for updates...
	I0717 11:04:02.746439    8152 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:04:02.746446    8152 status.go:255] checking status of multinode-931000 ...
	I0717 11:04:02.746648    8152 status.go:330] multinode-931000 host status = "Stopped" (err=<nil>)
	I0717 11:04:02.746652    8152 status.go:343] host is not running, skipping remaining checks
	I0717 11:04:02.746655    8152 status.go:257] multinode-931000 status: &{Name:multinode-931000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-931000 status --alsologtostderr": multinode-931000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-931000 status --alsologtostderr": multinode-931000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (30.109416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (1.89s)
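
The stop itself succeeded, but the assertions at multinode_test.go:364 and :368 fail because the status output above lists only the control-plane node: the worker nodes were never created, so the expected count of stopped hosts and kubelets comes up short. A small Go sketch of that style of count check follows; the exact assertion shape is assumed, not the test's actual code:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output captured above: a single node, because the worker
		// nodes were never added to the cluster.
		out := "multinode-931000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
		// Assumed shape of the failing assertion: a multinode cluster should
		// report one "host: Stopped" (and "kubelet: Stopped") per node.
		if n := strings.Count(out, "host: Stopped"); n < 2 {
			fmt.Printf("incorrect number of stopped hosts: got %d, want 2\n", n)
		}
	}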

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-931000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-931000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.180668583s)

-- stdout --
	* [multinode-931000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-931000" primary control-plane node in "multinode-931000" cluster
	* Restarting existing qemu2 VM for "multinode-931000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-931000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:04:02.804943    8156 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:04:02.805087    8156 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:04:02.805090    8156 out.go:304] Setting ErrFile to fd 2...
	I0717 11:04:02.805092    8156 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:04:02.805222    8156 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:04:02.806195    8156 out.go:298] Setting JSON to false
	I0717 11:04:02.822336    8156 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5614,"bootTime":1721233828,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:04:02.822401    8156 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:04:02.826268    8156 out.go:177] * [multinode-931000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:04:02.834020    8156 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:04:02.834075    8156 notify.go:220] Checking for updates...
	I0717 11:04:02.841089    8156 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:04:02.843987    8156 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:04:02.847052    8156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:04:02.850011    8156 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:04:02.853013    8156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:04:02.856308    8156 config.go:182] Loaded profile config "multinode-931000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:04:02.856558    8156 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:04:02.860880    8156 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:04:02.868059    8156 start.go:297] selected driver: qemu2
	I0717 11:04:02.868067    8156 start.go:901] validating driver "qemu2" against &{Name:multinode-931000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-931000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:04:02.868135    8156 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:04:02.870445    8156 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:04:02.870469    8156 cni.go:84] Creating CNI manager for ""
	I0717 11:04:02.870476    8156 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 11:04:02.870519    8156 start.go:340] cluster config:
	{Name:multinode-931000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-931000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:04:02.873897    8156 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:04:02.880960    8156 out.go:177] * Starting "multinode-931000" primary control-plane node in "multinode-931000" cluster
	I0717 11:04:02.884845    8156 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:04:02.884863    8156 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:04:02.884874    8156 cache.go:56] Caching tarball of preloaded images
	I0717 11:04:02.884948    8156 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:04:02.884955    8156 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:04:02.885020    8156 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/multinode-931000/config.json ...
	I0717 11:04:02.885485    8156 start.go:360] acquireMachinesLock for multinode-931000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:04:02.885515    8156 start.go:364] duration metric: took 23.416µs to acquireMachinesLock for "multinode-931000"
	I0717 11:04:02.885524    8156 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:04:02.885531    8156 fix.go:54] fixHost starting: 
	I0717 11:04:02.885660    8156 fix.go:112] recreateIfNeeded on multinode-931000: state=Stopped err=<nil>
	W0717 11:04:02.885673    8156 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:04:02.890038    8156 out.go:177] * Restarting existing qemu2 VM for "multinode-931000" ...
	I0717 11:04:02.897942    8156 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:04:02.897976    8156 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:72:e4:14:c1:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2
	I0717 11:04:02.899865    8156 main.go:141] libmachine: STDOUT: 
	I0717 11:04:02.899884    8156 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:04:02.899913    8156 fix.go:56] duration metric: took 14.3825ms for fixHost
	I0717 11:04:02.899917    8156 start.go:83] releasing machines lock for "multinode-931000", held for 14.396916ms
	W0717 11:04:02.899923    8156 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:04:02.899953    8156 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:04:02.899958    8156 start.go:729] Will try again in 5 seconds ...
	I0717 11:04:07.901261    8156 start.go:360] acquireMachinesLock for multinode-931000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:04:07.901803    8156 start.go:364] duration metric: took 390.75µs to acquireMachinesLock for "multinode-931000"
	I0717 11:04:07.901944    8156 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:04:07.901965    8156 fix.go:54] fixHost starting: 
	I0717 11:04:07.902750    8156 fix.go:112] recreateIfNeeded on multinode-931000: state=Stopped err=<nil>
	W0717 11:04:07.902778    8156 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:04:07.911168    8156 out.go:177] * Restarting existing qemu2 VM for "multinode-931000" ...
	I0717 11:04:07.915129    8156 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:04:07.915340    8156 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:72:e4:14:c1:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/multinode-931000/disk.qcow2
	I0717 11:04:07.923880    8156 main.go:141] libmachine: STDOUT: 
	I0717 11:04:07.923960    8156 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:04:07.924021    8156 fix.go:56] duration metric: took 22.058042ms for fixHost
	I0717 11:04:07.924036    8156 start.go:83] releasing machines lock for "multinode-931000", held for 22.2015ms
	W0717 11:04:07.924184    8156 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-931000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-931000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:04:07.931184    8156 out.go:177] 
	W0717 11:04:07.935211    8156 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:04:07.935262    8156 out.go:239] * 
	* 
	W0717 11:04:07.938133    8156 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:04:07.945167    8156 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-931000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (67.097125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
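
The start path visible in the log retries exactly once after a fixed five-second back-off ("Will try again in 5 seconds ...") before surfacing the error as GUEST_PROVISION. A simplified Go sketch of that control flow is below; it is illustrative only, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"time"
	)

	// startWithRetry mirrors the sequence in the log: one failed attempt,
	// a fixed 5s wait, one more attempt, then the error is returned.
	func startWithRetry(start func() error) error {
		err := start()
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		if err = start(); err != nil {
			return fmt.Errorf("error provisioning guest: Failed to start host: %w", err)
		}
		return nil
	}

	func main() {
		// Simulate the failing driver start seen throughout this report.
		err := startWithRetry(func() error {
			return fmt.Errorf("driver start: Failed to connect to \"/var/run/socket_vmnet\": Connection refused")
		})
		fmt.Println("exit:", err)
	}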

TestMultiNode/serial/ValidateNameConflict (20.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-931000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-931000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-931000-m01 --driver=qemu2 : exit status 80 (9.787672333s)

-- stdout --
	* [multinode-931000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-931000-m01" primary control-plane node in "multinode-931000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-931000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-931000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-931000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-931000-m02 --driver=qemu2 : exit status 80 (10.11885675s)

-- stdout --
	* [multinode-931000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-931000-m02" primary control-plane node in "multinode-931000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-931000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-931000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-931000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-931000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-931000: exit status 83 (81.932625ms)

-- stdout --
	* The control-plane node multinode-931000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-931000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-931000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-931000 -n multinode-931000: exit status 7 (30.433125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-931000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.13s)
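
This test deliberately starts standalone profiles named multinode-931000-m01 and multinode-931000-m02, which match the naming used for secondary nodes of the multinode-931000 cluster, to check that such conflicts are caught; here both starts fail earlier, on the same socket_vmnet error. A small Go sketch of the collision being validated follows; the "-mNN" suffix pattern is an assumption inferred from the profile names in the log:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Assumed convention: secondary nodes are named "<profile>-m02",
		// "<profile>-m03", ..., so a standalone profile ending in "-mNN"
		// can collide with node names of an existing cluster.
		nodeSuffix := regexp.MustCompile(`-m\d{2}$`)
		for _, p := range []string{"multinode-931000-m01", "multinode-931000-m02"} {
			fmt.Printf("%s conflicts with node naming: %v\n", p, nodeSuffix.MatchString(p))
		}
	}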

TestPreload (9.89s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-396000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-396000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.737131875s)

-- stdout --
	* [test-preload-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-396000" primary control-plane node in "test-preload-396000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-396000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:04:28.291089    8215 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:04:28.291231    8215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:04:28.291234    8215 out.go:304] Setting ErrFile to fd 2...
	I0717 11:04:28.291241    8215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:04:28.291371    8215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:04:28.292434    8215 out.go:298] Setting JSON to false
	I0717 11:04:28.308485    8215 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5640,"bootTime":1721233828,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:04:28.308555    8215 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:04:28.315239    8215 out.go:177] * [test-preload-396000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:04:28.322393    8215 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:04:28.322445    8215 notify.go:220] Checking for updates...
	I0717 11:04:28.329327    8215 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:04:28.332336    8215 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:04:28.333851    8215 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:04:28.337311    8215 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:04:28.340401    8215 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:04:28.343751    8215 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:04:28.343806    8215 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:04:28.348323    8215 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:04:28.355402    8215 start.go:297] selected driver: qemu2
	I0717 11:04:28.355409    8215 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:04:28.355416    8215 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:04:28.357729    8215 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:04:28.360246    8215 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:04:28.363478    8215 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:04:28.363497    8215 cni.go:84] Creating CNI manager for ""
	I0717 11:04:28.363504    8215 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:04:28.363508    8215 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:04:28.363538    8215 start.go:340] cluster config:
	{Name:test-preload-396000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:04:28.367250    8215 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:04:28.375325    8215 out.go:177] * Starting "test-preload-396000" primary control-plane node in "test-preload-396000" cluster
	I0717 11:04:28.379321    8215 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0717 11:04:28.379405    8215 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/test-preload-396000/config.json ...
	I0717 11:04:28.379433    8215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/test-preload-396000/config.json: {Name:mk7ceb78d1a3e09b49bdde3ea7e9f01471068319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:04:28.379426    8215 cache.go:107] acquiring lock: {Name:mkad187a4eaa691c903f508696b7f1cd0599d430 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:04:28.379438    8215 cache.go:107] acquiring lock: {Name:mk92f31d2579be92bf794d7c3e53f3a3268b6dd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:04:28.379461    8215 cache.go:107] acquiring lock: {Name:mk767216a4300966e9e72ef3554728bbdb18b426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:04:28.379437    8215 cache.go:107] acquiring lock: {Name:mkd16abaefe44aae704eeea1a8ced9125eb116d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:04:28.379547    8215 cache.go:107] acquiring lock: {Name:mkd4a58de682130d851ca5577fe983fd8dbffd11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:04:28.379609    8215 cache.go:107] acquiring lock: {Name:mk909617110df5f33b97cb328d65a18c93d0a20e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:04:28.379613    8215 cache.go:107] acquiring lock: {Name:mk32fdc8bdb34f830b25a5b94b189e82017f8e21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:04:28.379795    8215 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0717 11:04:28.379804    8215 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0717 11:04:28.379795    8215 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:04:28.379821    8215 cache.go:107] acquiring lock: {Name:mk197fb85d25eca2d6d70158a36d625423e91a25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:04:28.379938    8215 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:04:28.379954    8215 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 11:04:28.379971    8215 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:04:28.380011    8215 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0717 11:04:28.380036    8215 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0717 11:04:28.380075    8215 start.go:360] acquireMachinesLock for test-preload-396000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:04:28.380113    8215 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "test-preload-396000"
	I0717 11:04:28.380124    8215 start.go:93] Provisioning new machine with config: &{Name:test-preload-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:04:28.380169    8215 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:04:28.387345    8215 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:04:28.391497    8215 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0717 11:04:28.391525    8215 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0717 11:04:28.391594    8215 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:04:28.391677    8215 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:04:28.392140    8215 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 11:04:28.393769    8215 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0717 11:04:28.393793    8215 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0717 11:04:28.393800    8215 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:04:28.404667    8215 start.go:159] libmachine.API.Create for "test-preload-396000" (driver="qemu2")
	I0717 11:04:28.404698    8215 client.go:168] LocalClient.Create starting
	I0717 11:04:28.404807    8215 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:04:28.404840    8215 main.go:141] libmachine: Decoding PEM data...
	I0717 11:04:28.404849    8215 main.go:141] libmachine: Parsing certificate...
	I0717 11:04:28.404895    8215 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:04:28.404918    8215 main.go:141] libmachine: Decoding PEM data...
	I0717 11:04:28.404924    8215 main.go:141] libmachine: Parsing certificate...
	I0717 11:04:28.405349    8215 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:04:28.545713    8215 main.go:141] libmachine: Creating SSH key...
	I0717 11:04:28.611644    8215 main.go:141] libmachine: Creating Disk image...
	I0717 11:04:28.611663    8215 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:04:28.611881    8215 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/disk.qcow2
	I0717 11:04:28.622036    8215 main.go:141] libmachine: STDOUT: 
	I0717 11:04:28.622053    8215 main.go:141] libmachine: STDERR: 
	I0717 11:04:28.622117    8215 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/disk.qcow2 +20000M
	I0717 11:04:28.631437    8215 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:04:28.631460    8215 main.go:141] libmachine: STDERR: 
	I0717 11:04:28.631475    8215 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/disk.qcow2
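Note: the disk image above is produced by two qemu-img invocations, a raw-to-qcow2 convert followed by a resize. A minimal Go sketch of the same sequence via os/exec (a hypothetical helper, not minikube's code; assumes qemu-img is on PATH, with paths shortened):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk mirrors the two qemu-img calls logged above: convert the raw
	// boot image to qcow2, then grow the qcow2 file by the requested amount.
	func createDisk(rawPath, qcow2Path, grow string) error {
		convert := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", rawPath, qcow2Path)
		if out, err := convert.CombinedOutput(); err != nil {
			return fmt.Errorf("convert: %v: %s", err, out)
		}
		resize := exec.Command("qemu-img", "resize", qcow2Path, grow)
		if out, err := resize.CombinedOutput(); err != nil {
			return fmt.Errorf("resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// "+20000M" matches the resize argument used by the driver above.
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
			fmt.Println(err)
		}
	}
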
	I0717 11:04:28.631479    8215 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:04:28.631491    8215 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:04:28.631515    8215 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:b2:31:bd:f8:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/disk.qcow2
	I0717 11:04:28.633296    8215 main.go:141] libmachine: STDOUT: 
	I0717 11:04:28.633311    8215 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:04:28.633327    8215 client.go:171] duration metric: took 228.625042ms to LocalClient.Create
	I0717 11:04:28.808115    8215 cache.go:162] opening:  /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0717 11:04:28.826349    8215 cache.go:162] opening:  /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0717 11:04:28.834862    8215 cache.go:162] opening:  /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0717 11:04:28.850884    8215 cache.go:162] opening:  /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0717 11:04:28.862293    8215 cache.go:162] opening:  /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0717 11:04:28.879972    8215 cache.go:162] opening:  /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0717 11:04:28.950883    8215 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0717 11:04:28.950974    8215 cache.go:162] opening:  /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0717 11:04:28.976145    8215 cache.go:157] /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0717 11:04:28.976190    8215 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 596.720417ms
	I0717 11:04:28.976235    8215 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0717 11:04:29.115991    8215 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0717 11:04:29.116082    8215 cache.go:162] opening:  /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 11:04:29.311397    8215 cache.go:157] /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 11:04:29.311447    8215 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 932.003583ms
	I0717 11:04:29.311490    8215 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 11:04:30.633575    8215 start.go:128] duration metric: took 2.253367958s to createHost
	I0717 11:04:30.633633    8215 start.go:83] releasing machines lock for "test-preload-396000", held for 2.253508416s
	W0717 11:04:30.633700    8215 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:04:30.644563    8215 out.go:177] * Deleting "test-preload-396000" in qemu2 ...
	W0717 11:04:30.671488    8215 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:04:30.671516    8215 start.go:729] Will try again in 5 seconds ...
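Note: the retry below fails identically because nothing is listening on /var/run/socket_vmnet, so waiting cannot help until the socket_vmnet daemon is started. A minimal Go probe of the same unix socket (a hypothetical diagnostic, not part of the test suite) reproduces the "Connection refused" seen above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSocketVMnet dials the unix socket that the qemu2 driver hands to
	// socket_vmnet_client; when the socket_vmnet daemon is not running, the
	// dial fails with "connection refused", matching the error logged above.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Println(err) // expected on this host: connection refused
			return
		}
		fmt.Println("socket_vmnet is accepting connections")
	}
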
	I0717 11:04:30.992671    8215 cache.go:157] /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0717 11:04:30.992740    8215 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.613145708s
	I0717 11:04:30.992771    8215 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0717 11:04:31.220723    8215 cache.go:157] /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0717 11:04:31.220772    8215 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 2.841335083s
	I0717 11:04:31.220797    8215 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0717 11:04:31.623953    8215 cache.go:157] /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0717 11:04:31.624002    8215 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.244465083s
	I0717 11:04:31.624071    8215 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0717 11:04:32.602457    8215 cache.go:157] /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0717 11:04:32.602521    8215 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.223092583s
	I0717 11:04:32.602547    8215 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0717 11:04:35.197608    8215 cache.go:157] /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0717 11:04:35.197686    8215 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.817819958s
	I0717 11:04:35.197718    8215 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0717 11:04:35.672102    8215 start.go:360] acquireMachinesLock for test-preload-396000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:04:35.672556    8215 start.go:364] duration metric: took 382.5µs to acquireMachinesLock for "test-preload-396000"
	I0717 11:04:35.672691    8215 start.go:93] Provisioning new machine with config: &{Name:test-preload-396000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-396000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:04:35.672918    8215 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:04:35.678614    8215 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:04:35.728168    8215 start.go:159] libmachine.API.Create for "test-preload-396000" (driver="qemu2")
	I0717 11:04:35.728222    8215 client.go:168] LocalClient.Create starting
	I0717 11:04:35.728362    8215 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:04:35.728426    8215 main.go:141] libmachine: Decoding PEM data...
	I0717 11:04:35.728443    8215 main.go:141] libmachine: Parsing certificate...
	I0717 11:04:35.728503    8215 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:04:35.728561    8215 main.go:141] libmachine: Decoding PEM data...
	I0717 11:04:35.728572    8215 main.go:141] libmachine: Parsing certificate...
	I0717 11:04:35.729063    8215 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:04:35.879774    8215 main.go:141] libmachine: Creating SSH key...
	I0717 11:04:35.930305    8215 main.go:141] libmachine: Creating Disk image...
	I0717 11:04:35.930312    8215 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:04:35.930516    8215 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/disk.qcow2
	I0717 11:04:35.939783    8215 main.go:141] libmachine: STDOUT: 
	I0717 11:04:35.939803    8215 main.go:141] libmachine: STDERR: 
	I0717 11:04:35.939842    8215 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/disk.qcow2 +20000M
	I0717 11:04:35.947849    8215 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:04:35.947863    8215 main.go:141] libmachine: STDERR: 
	I0717 11:04:35.947874    8215 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/disk.qcow2
	I0717 11:04:35.947879    8215 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:04:35.947888    8215 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:04:35.947930    8215 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:99:13:9a:90:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/test-preload-396000/disk.qcow2
	I0717 11:04:35.949646    8215 main.go:141] libmachine: STDOUT: 
	I0717 11:04:35.949662    8215 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:04:35.949674    8215 client.go:171] duration metric: took 221.446625ms to LocalClient.Create
	I0717 11:04:37.628200    8215 cache.go:157] /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0717 11:04:37.628278    8215 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.248645584s
	I0717 11:04:37.628310    8215 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0717 11:04:37.628352    8215 cache.go:87] Successfully saved all images to host disk.
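Note: the cache paths in the lines above follow one scheme: the image reference's registry and repository become directories under cache/images/<arch>, and the tag is appended after an underscore. A hypothetical Go helper reproducing that mapping for tag-based references (digest references would need separate handling; this is an illustration of the observed layout, not minikube's implementation):

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// cachePath maps a tagged image reference such as
	// "registry.k8s.io/pause:3.7" to the on-disk layout seen in the log:
	// <minikubeHome>/cache/images/<arch>/registry.k8s.io/pause_3.7
	func cachePath(home, arch, ref string) string {
		name, tag := ref, "latest"
		if i := strings.LastIndex(ref, ":"); i >= 0 {
			name, tag = ref[:i], ref[i+1:]
		}
		return filepath.Join(home, "cache", "images", arch, name+"_"+tag)
	}

	func main() {
		fmt.Println(cachePath("/Users/jenkins/.minikube", "arm64", "registry.k8s.io/pause:3.7"))
	}
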
	I0717 11:04:37.951869    8215 start.go:128] duration metric: took 2.278921125s to createHost
	I0717 11:04:37.951916    8215 start.go:83] releasing machines lock for "test-preload-396000", held for 2.279334459s
	W0717 11:04:37.952284    8215 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-396000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:04:37.965688    8215 out.go:177] 
	W0717 11:04:37.970824    8215 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:04:37.970848    8215 out.go:239] * 
	* 
	W0717 11:04:37.973558    8215 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:04:37.985671    8215 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-396000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-17 11:04:38.003699 -0700 PDT m=+660.648950209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-396000 -n test-preload-396000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-396000 -n test-preload-396000: exit status 7 (68.865ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-396000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-396000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-396000
--- FAIL: TestPreload (9.89s)

TestScheduledStopUnix (9.89s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-606000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-606000 --memory=2048 --driver=qemu2 : exit status 80 (9.742319375s)

-- stdout --
	* [scheduled-stop-606000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-606000" primary control-plane node in "scheduled-stop-606000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-606000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-606000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-606000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-606000" primary control-plane node in "scheduled-stop-606000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-606000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-606000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-17 11:04:47.894714 -0700 PDT m=+670.539950584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-606000 -n scheduled-stop-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-606000 -n scheduled-stop-606000: exit status 7 (68.836416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-606000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-606000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-606000
--- FAIL: TestScheduledStopUnix (9.89s)

TestSkaffold (12.22s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3500862066 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3500862066 version: (1.061002291s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-456000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-456000 --memory=2600 --driver=qemu2 : exit status 80 (9.827631208s)

-- stdout --
	* [skaffold-456000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-456000" primary control-plane node in "skaffold-456000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-456000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-456000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-456000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-456000" primary control-plane node in "skaffold-456000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-456000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-456000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-17 11:05:00.115037 -0700 PDT m=+682.760254834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-456000 -n skaffold-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-456000 -n skaffold-456000: exit status 7 (64.527417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-456000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-456000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-456000
--- FAIL: TestSkaffold (12.22s)

TestRunningBinaryUpgrade (708.68s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1863991490 start -p running-upgrade-891000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1863991490 start -p running-upgrade-891000 --memory=2200 --vm-driver=qemu2 : (52.416918333s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-891000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-891000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.051968042s)

-- stdout --
	* [running-upgrade-891000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-891000" primary control-plane node in "running-upgrade-891000" cluster
	* Updating the running qemu2 "running-upgrade-891000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0717 11:06:34.198638    8606 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:06:34.198778    8606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:06:34.198781    8606 out.go:304] Setting ErrFile to fd 2...
	I0717 11:06:34.198784    8606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:06:34.198903    8606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:06:34.199841    8606 out.go:298] Setting JSON to false
	I0717 11:06:34.215966    8606 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5766,"bootTime":1721233828,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:06:34.216039    8606 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:06:34.224140    8606 out.go:177] * [running-upgrade-891000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:06:34.230834    8606 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:06:34.230904    8606 notify.go:220] Checking for updates...
	I0717 11:06:34.237826    8606 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:06:34.241789    8606 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:06:34.244800    8606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:06:34.247767    8606 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:06:34.250839    8606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:06:34.254065    8606 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:06:34.257751    8606 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 11:06:34.260782    8606 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:06:34.264786    8606 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:06:34.271814    8606 start.go:297] selected driver: qemu2
	I0717 11:06:34.271821    8606 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-891000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:06:34.271874    8606 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:06:34.274263    8606 cni.go:84] Creating CNI manager for ""
	I0717 11:06:34.274282    8606 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:06:34.274312    8606 start.go:340] cluster config:
	{Name:running-upgrade-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-891000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:06:34.274373    8606 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:06:34.281807    8606 out.go:177] * Starting "running-upgrade-891000" primary control-plane node in "running-upgrade-891000" cluster
	I0717 11:06:34.285828    8606 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0717 11:06:34.285856    8606 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0717 11:06:34.285866    8606 cache.go:56] Caching tarball of preloaded images
	I0717 11:06:34.285948    8606 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:06:34.285954    8606 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0717 11:06:34.286010    8606 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/config.json ...
	I0717 11:06:34.286369    8606 start.go:360] acquireMachinesLock for running-upgrade-891000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:06:34.286411    8606 start.go:364] duration metric: took 34.708µs to acquireMachinesLock for "running-upgrade-891000"
	I0717 11:06:34.286419    8606 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:06:34.286424    8606 fix.go:54] fixHost starting: 
	I0717 11:06:34.287001    8606 fix.go:112] recreateIfNeeded on running-upgrade-891000: state=Running err=<nil>
	W0717 11:06:34.287010    8606 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:06:34.292777    8606 out.go:177] * Updating the running qemu2 "running-upgrade-891000" VM ...
	I0717 11:06:34.303801    8606 machine.go:94] provisionDockerMachine start ...
	I0717 11:06:34.303858    8606 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:34.303977    8606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033129b0] 0x103315210 <nil>  [] 0s} localhost 51270 <nil> <nil>}
	I0717 11:06:34.303982    8606 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 11:06:34.374637    8606 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-891000
	
	I0717 11:06:34.374653    8606 buildroot.go:166] provisioning hostname "running-upgrade-891000"
	I0717 11:06:34.374692    8606 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:34.374799    8606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033129b0] 0x103315210 <nil>  [] 0s} localhost 51270 <nil> <nil>}
	I0717 11:06:34.374805    8606 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-891000 && echo "running-upgrade-891000" | sudo tee /etc/hostname
	I0717 11:06:34.449250    8606 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-891000
	
	I0717 11:06:34.449300    8606 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:34.449411    8606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033129b0] 0x103315210 <nil>  [] 0s} localhost 51270 <nil> <nil>}
	I0717 11:06:34.449419    8606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-891000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-891000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-891000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 11:06:34.518187    8606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
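Note: the shell executed above is an idempotent /etc/hosts edit: leave the file alone if a line already ends with the hostname, rewrite an existing 127.0.1.1 entry if there is one, and append otherwise. The same logic in Go (a hypothetical rendering that operates on the file contents as a string, not minikube's code):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the provisioning shell above: no change when a
	// line already ends with the hostname, rewrite an existing 127.0.1.1 line,
	// otherwise append a fresh 127.0.1.1 entry.
	func ensureHostsEntry(hosts, hostname string) string {
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(hosts) {
			return hosts
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "running-upgrade-891000"))
	}
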
	I0717 11:06:34.518198    8606 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19282-6331/.minikube CaCertPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19282-6331/.minikube}
	I0717 11:06:34.518205    8606 buildroot.go:174] setting up certificates
	I0717 11:06:34.518210    8606 provision.go:84] configureAuth start
	I0717 11:06:34.518213    8606 provision.go:143] copyHostCerts
	I0717 11:06:34.518282    8606 exec_runner.go:144] found /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.pem, removing ...
	I0717 11:06:34.518300    8606 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.pem
	I0717 11:06:34.518422    8606 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.pem (1078 bytes)
	I0717 11:06:34.518594    8606 exec_runner.go:144] found /Users/jenkins/minikube-integration/19282-6331/.minikube/cert.pem, removing ...
	I0717 11:06:34.518598    8606 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19282-6331/.minikube/cert.pem
	I0717 11:06:34.518647    8606 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19282-6331/.minikube/cert.pem (1123 bytes)
	I0717 11:06:34.518755    8606 exec_runner.go:144] found /Users/jenkins/minikube-integration/19282-6331/.minikube/key.pem, removing ...
	I0717 11:06:34.518758    8606 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19282-6331/.minikube/key.pem
	I0717 11:06:34.518807    8606 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19282-6331/.minikube/key.pem (1679 bytes)
	I0717 11:06:34.518908    8606 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-891000 san=[127.0.0.1 localhost minikube running-upgrade-891000]
	I0717 11:06:34.599385    8606 provision.go:177] copyRemoteCerts
	I0717 11:06:34.599434    8606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 11:06:34.599443    8606 sshutil.go:53] new ssh client: &{IP:localhost Port:51270 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/running-upgrade-891000/id_rsa Username:docker}
	I0717 11:06:34.638050    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 11:06:34.644640    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 11:06:34.651638    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 11:06:34.658975    8606 provision.go:87] duration metric: took 140.756292ms to configureAuth
	I0717 11:06:34.658984    8606 buildroot.go:189] setting minikube options for container-runtime
	I0717 11:06:34.659098    8606 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:06:34.659130    8606 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:34.659224    8606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033129b0] 0x103315210 <nil>  [] 0s} localhost 51270 <nil> <nil>}
	I0717 11:06:34.659229    8606 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 11:06:34.729658    8606 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 11:06:34.729667    8606 buildroot.go:70] root file system type: tmpfs
	I0717 11:06:34.729717    8606 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 11:06:34.729760    8606 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:34.729869    8606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033129b0] 0x103315210 <nil>  [] 0s} localhost 51270 <nil> <nil>}
	I0717 11:06:34.729903    8606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 11:06:34.804952    8606 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 11:06:34.804999    8606 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:34.805116    8606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033129b0] 0x103315210 <nil>  [] 0s} localhost 51270 <nil> <nil>}
	I0717 11:06:34.805127    8606 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 11:06:34.875905    8606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
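Note: the unit update above is deliberately idempotent. The rendered unit goes to docker.service.new, and only if it differs from the installed file is it swapped in and Docker restarted. A minimal sketch of the same pattern (paths exactly as in the log):

    # Replace the unit and restart Docker only when the rendered file differs.
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    }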
	I0717 11:06:34.875915    8606 machine.go:97] duration metric: took 572.107791ms to provisionDockerMachine
	I0717 11:06:34.875921    8606 start.go:293] postStartSetup for "running-upgrade-891000" (driver="qemu2")
	I0717 11:06:34.875927    8606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 11:06:34.875975    8606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 11:06:34.875984    8606 sshutil.go:53] new ssh client: &{IP:localhost Port:51270 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/running-upgrade-891000/id_rsa Username:docker}
	I0717 11:06:34.914612    8606 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 11:06:34.915978    8606 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 11:06:34.915985    8606 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19282-6331/.minikube/addons for local assets ...
	I0717 11:06:34.916056    8606 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19282-6331/.minikube/files for local assets ...
	I0717 11:06:34.916177    8606 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/ssl/certs/68202.pem -> 68202.pem in /etc/ssl/certs
	I0717 11:06:34.916302    8606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 11:06:34.918963    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/ssl/certs/68202.pem --> /etc/ssl/certs/68202.pem (1708 bytes)
	I0717 11:06:34.926042    8606 start.go:296] duration metric: took 50.115667ms for postStartSetup
	I0717 11:06:34.926056    8606 fix.go:56] duration metric: took 639.631375ms for fixHost
	I0717 11:06:34.926089    8606 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:34.926188    8606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033129b0] 0x103315210 <nil>  [] 0s} localhost 51270 <nil> <nil>}
	I0717 11:06:34.926194    8606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 11:06:34.998786    8606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721239594.796670513
	
	I0717 11:06:34.998796    8606 fix.go:216] guest clock: 1721239594.796670513
	I0717 11:06:34.998800    8606 fix.go:229] Guest: 2024-07-17 11:06:34.796670513 -0700 PDT Remote: 2024-07-17 11:06:34.926058 -0700 PDT m=+0.745553376 (delta=-129.387487ms)
	I0717 11:06:34.998815    8606 fix.go:200] guest clock delta is within tolerance: -129.387487ms
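Note: fix.go compares the guest's wall clock (read via date +%s.%N over SSH) against the host clock and accepts small skews; here the guest ran about 129 ms behind. A rough, illustrative equivalent from a host shell (assumes GNU date on the host and the SSH port/user shown in the log):

    # Read the guest clock over SSH and print the skew in milliseconds.
    guest=$(ssh -p 51270 docker@localhost 'date +%s.%N')
    host=$(date +%s.%N)
    echo "delta: $(echo "($guest - $host) * 1000" | bc) ms"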
	I0717 11:06:34.998821    8606 start.go:83] releasing machines lock for "running-upgrade-891000", held for 712.401875ms
	I0717 11:06:34.998883    8606 ssh_runner.go:195] Run: cat /version.json
	I0717 11:06:34.998894    8606 sshutil.go:53] new ssh client: &{IP:localhost Port:51270 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/running-upgrade-891000/id_rsa Username:docker}
	I0717 11:06:34.998883    8606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 11:06:34.998924    8606 sshutil.go:53] new ssh client: &{IP:localhost Port:51270 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/running-upgrade-891000/id_rsa Username:docker}
	W0717 11:06:34.999498    8606 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51270: connect: connection refused
	I0717 11:06:34.999523    8606 retry.go:31] will retry after 196.364833ms: dial tcp [::1]:51270: connect: connection refused
	W0717 11:06:35.237265    8606 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 11:06:35.237351    8606 ssh_runner.go:195] Run: systemctl --version
	I0717 11:06:35.239384    8606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 11:06:35.241325    8606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 11:06:35.241353    8606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 11:06:35.244448    8606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 11:06:35.248937    8606 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
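Note: the two find/sed passes above normalize any bridge or podman CNI config to minikube's pod network, dropping IPv6 ranges and forcing the 10.244.0.0/16 subnet; here only 87-podman-bridge.conflist needed patching. The podman pass, reformatted for readability:

    # Force podman bridge configs onto minikube's pod CIDR and gateway.
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*podman*' -not -name '*.mk_disabled' \
      -exec sudo sed -i -r \
        -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
        -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {} \;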
	I0717 11:06:35.248943    8606 start.go:495] detecting cgroup driver to use...
	I0717 11:06:35.249053    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 11:06:35.254317    8606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0717 11:06:35.257034    8606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 11:06:35.260043    8606 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 11:06:35.260065    8606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 11:06:35.263549    8606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 11:06:35.266663    8606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 11:06:35.269805    8606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 11:06:35.272666    8606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 11:06:35.275893    8606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 11:06:35.278719    8606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 11:06:35.281680    8606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 11:06:35.284470    8606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 11:06:35.287730    8606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 11:06:35.290724    8606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:35.371563    8606 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 11:06:35.377781    8606 start.go:495] detecting cgroup driver to use...
	I0717 11:06:35.377841    8606 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 11:06:35.383315    8606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 11:06:35.388489    8606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 11:06:35.396247    8606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 11:06:35.400844    8606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 11:06:35.405428    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 11:06:35.410939    8606 ssh_runner.go:195] Run: which cri-dockerd
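Note: with Docker chosen as the runtime, the tee just above rewrites /etc/crictl.yaml to point crictl at the cri-dockerd socket rather than containerd's, so the later "crictl version" call talks to Docker via CRI. The resulting file is a single line:

    # /etc/crictl.yaml as written above
    runtime-endpoint: unix:///var/run/cri-dockerd.sock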
	I0717 11:06:35.412323    8606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 11:06:35.415479    8606 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 11:06:35.420588    8606 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 11:06:35.514638    8606 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 11:06:35.607900    8606 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 11:06:35.607962    8606 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 11:06:35.613428    8606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:35.704316    8606 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 11:06:38.194702    8606 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.490366958s)
	I0717 11:06:38.194770    8606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 11:06:38.199296    8606 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 11:06:38.205480    8606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 11:06:38.210189    8606 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 11:06:38.299221    8606 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 11:06:38.384343    8606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:38.469565    8606 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 11:06:38.475133    8606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 11:06:38.479649    8606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:38.555573    8606 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 11:06:38.594197    8606 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 11:06:38.594271    8606 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 11:06:38.596701    8606 start.go:563] Will wait 60s for crictl version
	I0717 11:06:38.596744    8606 ssh_runner.go:195] Run: which crictl
	I0717 11:06:38.598347    8606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 11:06:38.609757    8606 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0717 11:06:38.609818    8606 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 11:06:38.622369    8606 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 11:06:38.643414    8606 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0717 11:06:38.643560    8606 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0717 11:06:38.645049    8606 kubeadm.go:883] updating cluster {Name:running-upgrade-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-891000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0717 11:06:38.645092    8606 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0717 11:06:38.645137    8606 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 11:06:38.654964    8606 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 11:06:38.654981    8606 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0717 11:06:38.655020    8606 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 11:06:38.658496    8606 ssh_runner.go:195] Run: which lz4
	I0717 11:06:38.659796    8606 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 11:06:38.661004    8606 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 11:06:38.661015    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0717 11:06:39.595186    8606 docker.go:649] duration metric: took 935.419084ms to copy over tarball
	I0717 11:06:39.595239    8606 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 11:06:40.700865    8606 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.105610208s)
	I0717 11:06:40.700880    8606 ssh_runner.go:146] rm: /preloaded.tar.lz4
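Note: because the images baked into the VM still carry the old k8s.gcr.io names, minikube falls back to its preload path: scp the ~360 MB lz4 tarball to /preloaded.tar.lz4, untar it over /var (which contains /var/lib/docker), restart Docker, and delete the tarball. The extraction step, as run above:

    # Unpack the preloaded image store over /var, preserving xattrs.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4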
	I0717 11:06:40.716629    8606 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 11:06:40.720022    8606 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0717 11:06:40.725012    8606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:40.784082    8606 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 11:06:42.236173    8606 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4520745s)
	I0717 11:06:42.236255    8606 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 11:06:42.249928    8606 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 11:06:42.249937    8606 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0717 11:06:42.249942    8606 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 11:06:42.253774    8606 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:42.256680    8606 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:42.258956    8606 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0717 11:06:42.259161    8606 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:42.261076    8606 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:42.261145    8606 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:42.262848    8606 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0717 11:06:42.263000    8606 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:42.264292    8606 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:42.264331    8606 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:42.265553    8606 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:42.265629    8606 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:42.266785    8606 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:42.267113    8606 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:42.267761    8606 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:42.268757    8606 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:42.607854    8606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:42.629745    8606 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0717 11:06:42.629767    8606 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:42.629828    8606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:42.637622    8606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0717 11:06:42.640080    8606 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0717 11:06:42.640752    8606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:42.652721    8606 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0717 11:06:42.652741    8606 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0717 11:06:42.652796    8606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0717 11:06:42.653956    8606 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0717 11:06:42.653970    8606 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:42.653997    8606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:42.666439    8606 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0717 11:06:42.666446    8606 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0717 11:06:42.666554    8606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0717 11:06:42.666554    8606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0717 11:06:42.668385    8606 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0717 11:06:42.668397    8606 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0717 11:06:42.668397    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0717 11:06:42.668405    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0717 11:06:42.683544    8606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:42.687495    8606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:42.688792    8606 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0717 11:06:42.688804    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0717 11:06:42.710363    8606 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0717 11:06:42.710389    8606 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:42.710440    8606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:42.722502    8606 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0717 11:06:42.722531    8606 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:42.722586    8606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	W0717 11:06:42.730217    8606 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0717 11:06:42.730377    8606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:42.750139    8606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:42.783611    8606 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0717 11:06:42.783611    8606 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0717 11:06:42.807947    8606 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0717 11:06:42.807958    8606 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0717 11:06:42.807946    8606 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0717 11:06:42.807975    8606 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:42.807987    8606 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:42.808027    8606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:42.808029    8606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:42.845963    8606 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0717 11:06:42.846080    8606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0717 11:06:42.854415    8606 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0717 11:06:42.861742    8606 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0717 11:06:42.861768    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0717 11:06:42.896319    8606 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0717 11:06:42.896430    8606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:42.940208    8606 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0717 11:06:42.940234    8606 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:42.940292    8606 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:42.970991    8606 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0717 11:06:42.971021    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0717 11:06:43.018078    8606 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 11:06:43.018215    8606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 11:06:43.054611    8606 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0717 11:06:43.054632    8606 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0717 11:06:43.054640    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0717 11:06:43.054640    8606 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0717 11:06:43.054667    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0717 11:06:43.221016    8606 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0717 11:06:43.221038    8606 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 11:06:43.221054    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0717 11:06:43.543798    8606 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 11:06:43.543840    8606 cache_images.go:92] duration metric: took 1.293889625s to LoadCachedImages
	W0717 11:06:43.543885    8606 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
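Note: even after the preload, the registry.k8s.io tags are still absent, so LoadCachedImages walks each required image: inspect its ID, remove the stale tag, scp the cached tar into /var/lib/minikube/images, and stream it into Docker. The step still fails overall because the kube-proxy cache file is missing on the host. The per-image load, with pause:3.7 as the example (paths from the log):

    # Stream a cached image tar into the Docker daemon on the guest.
    /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"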
	I0717 11:06:43.543891    8606 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0717 11:06:43.543963    8606 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-891000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-891000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
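Note: the rendered kubelet unit clears the inherited ExecStart and relaunches the v1.24.1 kubelet against the cri-dockerd socket with this profile's hostname override and node IP; it is written below as the 10-kubeadm.conf drop-in. An illustrative way to confirm the drop-in took effect once systemd has reloaded:

    # Show the effective kubelet unit, including the drop-in's ExecStart.
    sudo systemctl daemon-reload
    systemctl cat kubelet | grep -- --container-runtime-endpoint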
	I0717 11:06:43.544024    8606 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 11:06:43.573345    8606 cni.go:84] Creating CNI manager for ""
	I0717 11:06:43.573360    8606 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:06:43.573367    8606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 11:06:43.573375    8606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-891000 NodeName:running-upgrade-891000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 11:06:43.573440    8606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-891000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
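Note: this generated file bundles four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). It is staged as /var/tmp/minikube/kubeadm.yaml.new and only promoted after the drift check further down. A quick, illustrative way to list the document kinds in the staged file:

    # List the API versions and kinds of each embedded document.
    grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml.new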
	I0717 11:06:43.573494    8606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0717 11:06:43.577505    8606 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 11:06:43.577555    8606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 11:06:43.584693    8606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0717 11:06:43.590646    8606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 11:06:43.598894    8606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0717 11:06:43.604362    8606 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0717 11:06:43.607729    8606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:43.702954    8606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:06:43.707977    8606 certs.go:68] Setting up /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000 for IP: 10.0.2.15
	I0717 11:06:43.707985    8606 certs.go:194] generating shared ca certs ...
	I0717 11:06:43.707993    8606 certs.go:226] acquiring lock for ca certs: {Name:mkc544d9d9a3de35c1f6cee821ec7cd5d08f6f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:43.708552    8606 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.key
	I0717 11:06:43.708613    8606 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/proxy-client-ca.key
	I0717 11:06:43.708619    8606 certs.go:256] generating profile certs ...
	I0717 11:06:43.708676    8606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/client.key
	I0717 11:06:43.708687    8606 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/apiserver.key.fa121e4e
	I0717 11:06:43.708699    8606 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/apiserver.crt.fa121e4e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0717 11:06:43.777277    8606 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/apiserver.crt.fa121e4e ...
	I0717 11:06:43.777287    8606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/apiserver.crt.fa121e4e: {Name:mk1303d8e21584eada10a0316bf416104bff4639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:43.777533    8606 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/apiserver.key.fa121e4e ...
	I0717 11:06:43.777539    8606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/apiserver.key.fa121e4e: {Name:mk272552fb8d1aa7ad0128392cf047df277bbbab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:43.777663    8606 certs.go:381] copying /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/apiserver.crt.fa121e4e -> /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/apiserver.crt
	I0717 11:06:43.777816    8606 certs.go:385] copying /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/apiserver.key.fa121e4e -> /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/apiserver.key
	I0717 11:06:43.777972    8606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/proxy-client.key
	I0717 11:06:43.778111    8606 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/6820.pem (1338 bytes)
	W0717 11:06:43.778142    8606 certs.go:480] ignoring /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/6820_empty.pem, impossibly tiny 0 bytes
	I0717 11:06:43.778147    8606 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 11:06:43.778166    8606 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem (1078 bytes)
	I0717 11:06:43.778183    8606 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem (1123 bytes)
	I0717 11:06:43.778200    8606 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/key.pem (1679 bytes)
	I0717 11:06:43.778239    8606 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/ssl/certs/68202.pem (1708 bytes)
	I0717 11:06:43.778651    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 11:06:43.785823    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 11:06:43.792997    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 11:06:43.803346    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 11:06:43.811303    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 11:06:43.822259    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 11:06:43.828791    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 11:06:43.839303    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 11:06:43.845904    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/ssl/certs/68202.pem --> /usr/share/ca-certificates/68202.pem (1708 bytes)
	I0717 11:06:43.855285    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 11:06:43.862495    8606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/6820.pem --> /usr/share/ca-certificates/6820.pem (1338 bytes)
	I0717 11:06:43.868757    8606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 11:06:43.873457    8606 ssh_runner.go:195] Run: openssl version
	I0717 11:06:43.877350    8606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68202.pem && ln -fs /usr/share/ca-certificates/68202.pem /etc/ssl/certs/68202.pem"
	I0717 11:06:43.881286    8606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68202.pem
	I0717 11:06:43.882730    8606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:54 /usr/share/ca-certificates/68202.pem
	I0717 11:06:43.882749    8606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68202.pem
	I0717 11:06:43.884500    8606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68202.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 11:06:43.887174    8606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 11:06:43.890286    8606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:06:43.892143    8606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:06:43.892167    8606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:06:43.893994    8606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 11:06:43.896614    8606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6820.pem && ln -fs /usr/share/ca-certificates/6820.pem /etc/ssl/certs/6820.pem"
	I0717 11:06:43.899768    8606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6820.pem
	I0717 11:06:43.901409    8606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:54 /usr/share/ca-certificates/6820.pem
	I0717 11:06:43.901428    8606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6820.pem
	I0717 11:06:43.903215    8606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6820.pem /etc/ssl/certs/51391683.0"
	I0717 11:06:43.906048    8606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 11:06:43.907433    8606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 11:06:43.909275    8606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 11:06:43.911042    8606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 11:06:43.912859    8606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 11:06:43.914824    8606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 11:06:43.916543    8606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
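Note: the cert plumbing above follows OpenSSL's subject-hash convention: each CA in /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its hash (b5213941.0 for minikubeCA), and each serving cert is checked with -checkend 86400, i.e. "still valid 24 hours from now". Condensed into a sketch:

    # Install a CA under its subject-hash name, then assert >24h validity.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid >24h"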
	I0717 11:06:43.918440    8606 kubeadm.go:392] StartCluster: {Name:running-upgrade-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-891000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:06:43.918504    8606 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 11:06:43.942328    8606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 11:06:43.948547    8606 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 11:06:43.948555    8606 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 11:06:43.948593    8606 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 11:06:43.951790    8606 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:06:43.951827    8606 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-891000" does not appear in /Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:06:43.951841    8606 kubeconfig.go:62] /Users/jenkins/minikube-integration/19282-6331/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-891000" cluster setting kubeconfig missing "running-upgrade-891000" context setting]
	I0717 11:06:43.952021    8606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/kubeconfig: {Name:mk593058234481727c8f9c6b6ce8d5b26e4d4302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:43.952923    8606 kapi.go:59] client config for running-upgrade-891000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/client.key", CAFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1046a7730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:06:43.953762    8606 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 11:06:43.956420    8606 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-891000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0717 11:06:43.956425    8606 kubeadm.go:1160] stopping kube-system containers ...
	I0717 11:06:43.956465    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 11:06:43.967725    8606 docker.go:483] Stopping containers: [f006daa626f0 7695a18c1ee8 7bb9f0f8264c 61536ea4e843 da17848ef2db fd7f8a5743a1 532e66b427f5 53ceaff6c949 dcc0335fad73 df9cc50e6a7f d7a407351620 45eb3a19d32c 758d91fa570f e86e19946be2 3b34ba31e56f]
	I0717 11:06:43.967800    8606 ssh_runner.go:195] Run: docker stop f006daa626f0 7695a18c1ee8 7bb9f0f8264c 61536ea4e843 da17848ef2db fd7f8a5743a1 532e66b427f5 53ceaff6c949 dcc0335fad73 df9cc50e6a7f d7a407351620 45eb3a19d32c 758d91fa570f e86e19946be2 3b34ba31e56f
	I0717 11:06:44.996238    8606 ssh_runner.go:235] Completed: docker stop f006daa626f0 7695a18c1ee8 7bb9f0f8264c 61536ea4e843 da17848ef2db fd7f8a5743a1 532e66b427f5 53ceaff6c949 dcc0335fad73 df9cc50e6a7f d7a407351620 45eb3a19d32c 758d91fa570f e86e19946be2 3b34ba31e56f: (1.028419417s)
	I0717 11:06:44.996302    8606 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 11:06:45.062444    8606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:06:45.066271    8606 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 17 18:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul 17 18:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 17 18:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 17 18:06 /etc/kubernetes/scheduler.conf
	
	I0717 11:06:45.066301    8606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf
	I0717 11:06:45.073317    8606 kubeadm.go:163] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:06:45.073345    8606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:06:45.077316    8606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf
	I0717 11:06:45.081294    8606 kubeadm.go:163] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:06:45.081318    8606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:06:45.085305    8606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf
	I0717 11:06:45.089336    8606 kubeadm.go:163] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:06:45.089357    8606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:06:45.093306    8606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf
	I0717 11:06:45.102201    8606 kubeadm.go:163] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:06:45.102255    8606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
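Each of the four kubeconfigs is probed with grep for the expected control-plane endpoint; exit status 1 (no match) is read as a stale endpoint, so the file is deleted and left for kubeadm init phase kubeconfig to regenerate. A sketch of that cleanup loop, again assuming local exec instead of minikube's ssh_runner:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:51302"
		for _, f := range []string{"admin.conf", "kubelet.conf",
			"controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + f
			// grep exits non-zero when the pattern is absent; that is
			// treated as "endpoint not found", so the stale file is
			// removed and regenerated by `kubeadm init phase kubeconfig`.
			if err := exec.Command("grep", endpoint, path).Run(); err != nil {
				os.Remove(path)
			}
		}
	}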
	I0717 11:06:45.106718    8606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:06:45.109366    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:45.143866    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:45.614962    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:45.808552    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:45.830385    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
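Rather than a full kubeadm init, the control plane is rebuilt piecewise with kubeadm init phase subcommands in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence under the same local-exec assumption; rebuildControlPlane is a hypothetical name, and the PATH prefix mirrors the versioned binary directory in the log:

	package main

	import (
		"os"
		"os/exec"
	)

	func rebuildControlPlane() error {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, phase := range phases {
			args := append([]string{"init", "phase"}, phase...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("kubeadm", args...)
			// Prefer the versioned binaries, as the env PATH override in the log does.
			cmd.Env = append(os.Environ(),
				"PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
			if err := cmd.Run(); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		if err := rebuildControlPlane(); err != nil {
			panic(err)
		}
	}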
	I0717 11:06:45.851656    8606 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:06:45.851718    8606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:06:46.354068    8606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:06:46.853826    8606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:06:46.858191    8606 api_server.go:72] duration metric: took 1.006538875s to wait for apiserver process to appear ...
	I0717 11:06:46.858199    8606 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:06:46.858207    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:51.860366    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:51.860401    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:56.860706    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:56.860762    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:01.861519    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:01.861577    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:06.862768    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:06.862839    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:11.864112    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:11.864182    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:16.865875    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:16.865968    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:21.868038    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:21.868105    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:26.870535    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:26.870608    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:31.873335    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:31.873419    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:36.876096    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:36.876175    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:41.879284    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:41.879348    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:46.881998    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
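The wait loop above probes https://10.0.2.15:8443/healthz with roughly a five-second per-request timeout, and every attempt dies with Client.Timeout exceeded, so minikube falls back to gathering diagnostics below. A minimal sketch of that polling pattern, assuming the timeouts visible in the log; waitForHealthz is a hypothetical name, and skipping TLS verification is a simplification for probing a raw IP:

	package main

	import (
		"crypto/tls"
		"errors"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it answers
	// 200 OK or the overall deadline passes. Each probe uses its own short
	// client timeout, mirroring the ~5s "Client.Timeout exceeded" entries
	// in the log above.
	func waitForHealthz(url string, overall time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-probe cap
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				// mirrors "stopped: ...: Client.Timeout exceeded while awaiting headers"
				fmt.Printf("stopped: %s: %v\n", url, err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered healthz
			}
			time.Sleep(500 * time.Millisecond) // non-200: brief backoff, then retry
		}
		return errors.New("apiserver healthz never returned 200 before the deadline")
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 90*time.Second); err != nil {
			fmt.Println(err)
		}
	}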
	I0717 11:07:46.882507    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:46.918583    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:07:46.918722    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:46.938082    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:07:46.938181    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:46.956645    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:07:46.956723    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:46.968201    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:07:46.968266    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:46.980511    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:07:46.980583    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:46.991038    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:07:46.991102    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:47.002645    8606 logs.go:276] 0 containers: []
	W0717 11:07:47.002660    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:47.002721    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:47.013066    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:07:47.013084    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:07:47.013089    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:47.025804    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:47.025816    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:47.094518    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:07:47.094532    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:07:47.108331    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:07:47.108345    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:07:47.119554    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:07:47.119567    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:07:47.135723    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:07:47.135734    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:07:47.154399    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:07:47.154412    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:07:47.166409    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:47.166425    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:47.192482    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:47.192492    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:47.196751    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:07:47.196760    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:07:47.212395    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:07:47.212407    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:07:47.227366    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:07:47.227379    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:07:47.239907    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:07:47.239918    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:07:47.253396    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:07:47.253408    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:07:47.265549    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:47.265561    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:47.302383    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:07:47.302393    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
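Every diagnostic pass repeats the same two-step pattern: docker ps -a with a name=k8s_<component> filter to collect container IDs (current and exited instances alike), then docker logs --tail 400 on each match. A sketch of one pass, assuming direct docker access; gatherComponentLogs is a hypothetical name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gatherComponentLogs mirrors the "Gathering logs for ..." passes above:
	// list containers whose name matches k8s_<component>, then tail the
	// last 400 log lines of each match.
	func gatherComponentLogs(component string) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return
		}
		for _, id := range strings.Fields(string(out)) {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
		}
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"storage-provisioner"} {
			gatherComponentLogs(c)
		}
	}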
	I0717 11:07:49.819742    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:54.822239    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:54.822686    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:54.858840    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:07:54.858975    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:54.880640    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:07:54.880752    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:54.895297    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:07:54.895368    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:54.908398    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:07:54.908471    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:54.919618    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:07:54.919686    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:54.930335    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:07:54.930424    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:54.941893    8606 logs.go:276] 0 containers: []
	W0717 11:07:54.941904    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:54.941960    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:54.957299    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:07:54.957322    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:07:54.957327    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:07:54.972331    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:07:54.972344    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:07:54.984607    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:07:54.984628    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:07:54.996353    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:07:54.996364    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:07:55.012282    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:07:55.012292    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:55.024301    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:07:55.024314    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:07:55.036446    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:55.036458    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:55.062258    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:55.062269    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:55.099445    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:07:55.099457    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:07:55.114488    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:07:55.114503    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:07:55.129451    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:07:55.129461    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:07:55.140781    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:07:55.140792    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:07:55.152562    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:55.152576    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:55.190806    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:55.190813    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:55.195063    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:07:55.195068    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:07:55.210772    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:07:55.210783    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:07:57.730569    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:02.733018    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:02.733463    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:02.776629    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:08:02.776763    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:02.797518    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:08:02.797631    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:02.812101    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:08:02.812189    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:02.825624    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:08:02.825699    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:02.836930    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:08:02.836994    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:02.851277    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:08:02.851344    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:02.861253    8606 logs.go:276] 0 containers: []
	W0717 11:08:02.861264    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:02.861313    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:02.871712    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:08:02.871729    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:08:02.871735    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:08:02.886264    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:08:02.886273    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:08:02.898267    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:08:02.898279    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:08:02.915563    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:02.915576    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:02.919737    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:02.919743    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:02.957050    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:08:02.957060    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:08:02.971222    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:02.971233    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:02.996712    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:02.996719    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:03.035062    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:08:03.035070    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:08:03.051615    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:08:03.051629    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:08:03.067379    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:08:03.067392    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:03.079221    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:08:03.079233    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:08:03.090972    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:08:03.090982    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:08:03.102177    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:08:03.102191    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:08:03.114128    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:08:03.114141    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:08:03.125249    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:08:03.125261    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:08:05.638682    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:10.641386    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:10.641748    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:10.676257    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:08:10.676381    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:10.698938    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:08:10.699028    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:10.716019    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:08:10.716092    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:10.727229    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:08:10.727296    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:10.737712    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:08:10.737783    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:10.750636    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:08:10.750708    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:10.761339    8606 logs.go:276] 0 containers: []
	W0717 11:08:10.761350    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:10.761406    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:10.781769    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:08:10.781785    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:10.781790    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:10.820086    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:08:10.820098    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:08:10.831704    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:08:10.831714    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:08:10.851927    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:10.851939    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:10.877825    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:08:10.877834    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:10.890289    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:10.890302    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:10.894534    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:10.894541    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:10.929508    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:08:10.929519    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:08:10.941956    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:08:10.941969    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:08:10.957965    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:08:10.957978    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:08:10.973521    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:08:10.973531    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:08:10.987677    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:08:10.987685    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:08:10.999504    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:08:10.999513    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:08:11.014085    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:08:11.014096    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:08:11.025729    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:08:11.025743    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:08:11.043256    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:08:11.043265    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:08:13.558143    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:18.561026    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:18.561384    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:18.601253    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:08:18.601409    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:18.624778    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:08:18.624907    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:18.639311    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:08:18.639395    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:18.651789    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:08:18.651860    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:18.662409    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:08:18.662488    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:18.677782    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:08:18.677869    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:18.688555    8606 logs.go:276] 0 containers: []
	W0717 11:08:18.688572    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:18.688643    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:18.700400    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:08:18.700418    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:18.700423    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:18.740824    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:08:18.740836    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:08:18.755091    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:08:18.755100    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:08:18.770202    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:08:18.770212    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:08:18.787931    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:08:18.787941    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:08:18.802951    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:08:18.802963    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:08:18.814854    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:08:18.814871    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:08:18.829366    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:08:18.829377    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:08:18.840600    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:08:18.840611    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:08:18.852525    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:18.852535    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:18.877620    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:08:18.877631    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:18.889150    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:18.889161    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:18.893995    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:18.894002    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:18.927020    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:08:18.927031    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:08:18.939103    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:08:18.939113    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:08:18.952718    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:08:18.952728    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:08:21.469580    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:26.472468    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:26.472975    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:26.514631    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:08:26.514761    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:26.536209    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:08:26.536328    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:26.550702    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:08:26.550772    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:26.562646    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:08:26.562715    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:26.573363    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:08:26.573438    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:26.588910    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:08:26.588985    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:26.603793    8606 logs.go:276] 0 containers: []
	W0717 11:08:26.603803    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:26.603856    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:26.614608    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:08:26.614625    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:26.614631    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:26.653178    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:26.653186    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:26.687425    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:08:26.687441    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:08:26.701673    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:08:26.701683    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:08:26.713143    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:08:26.713156    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:08:26.726345    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:08:26.726359    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:08:26.742157    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:08:26.742168    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:08:26.754350    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:26.754361    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:26.758738    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:08:26.758746    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:08:26.774348    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:08:26.774360    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:08:26.786354    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:08:26.786367    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:08:26.803139    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:08:26.803150    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:08:26.820180    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:08:26.820190    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:26.831801    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:08:26.831813    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:08:26.846788    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:08:26.846797    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:08:26.859925    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:26.859938    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:29.386639    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:34.389493    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:34.389836    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:34.424181    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:08:34.424317    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:34.444776    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:08:34.444878    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:34.459049    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:08:34.459125    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:34.472180    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:08:34.472251    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:34.482454    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:08:34.482519    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:34.497999    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:08:34.498068    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:34.508093    8606 logs.go:276] 0 containers: []
	W0717 11:08:34.508109    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:34.508166    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:34.518427    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:08:34.518444    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:08:34.518450    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:08:34.532075    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:08:34.532088    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:08:34.545740    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:08:34.545755    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:08:34.561222    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:08:34.561236    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:08:34.579416    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:34.579429    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:34.605108    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:34.605118    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:34.609592    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:34.609600    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:34.643149    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:08:34.643161    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:34.655137    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:08:34.655151    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:08:34.671418    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:08:34.671432    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:08:34.683222    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:08:34.683233    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:08:34.698065    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:08:34.698078    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:08:34.715079    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:08:34.715089    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:08:34.727494    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:08:34.727507    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:08:34.738885    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:34.738896    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:34.777450    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:08:34.777463    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:08:37.291192    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:42.294091    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:42.294541    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:42.330655    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:08:42.330775    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:42.352243    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:08:42.352356    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:42.367823    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:08:42.367896    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:42.381776    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:08:42.381851    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:42.393075    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:08:42.393140    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:42.403858    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:08:42.403922    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:42.414202    8606 logs.go:276] 0 containers: []
	W0717 11:08:42.414216    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:42.414271    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:42.427019    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:08:42.427039    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:08:42.427044    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:08:42.438565    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:08:42.438579    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:08:42.458847    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:08:42.458857    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:08:42.470435    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:08:42.470445    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:42.481802    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:42.481814    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:42.517607    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:08:42.517615    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:08:42.532438    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:08:42.532452    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:08:42.543763    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:08:42.543774    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:08:42.561058    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:42.561069    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:42.586441    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:42.586448    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:42.590469    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:42.590478    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:42.625034    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:08:42.625048    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:08:42.646004    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:08:42.646016    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:08:42.663860    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:08:42.663870    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:08:42.685256    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:08:42.685270    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:08:42.699746    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:08:42.699761    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:08:45.214422    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:50.217100    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:50.217610    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:50.257787    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:08:50.257921    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:50.284953    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:08:50.285040    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:50.299423    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:08:50.299493    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:50.311213    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:08:50.311282    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:50.322087    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:08:50.322147    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:50.332596    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:08:50.332666    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:50.342675    8606 logs.go:276] 0 containers: []
	W0717 11:08:50.342686    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:50.342737    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:50.352543    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:08:50.352562    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:50.352568    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:50.357430    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:08:50.357439    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:08:50.374312    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:08:50.374321    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:08:50.385781    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:08:50.385792    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:50.397292    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:08:50.397305    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:08:50.411418    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:08:50.411431    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:08:50.422585    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:08:50.422595    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:08:50.434393    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:08:50.434405    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:08:50.449875    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:08:50.449885    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:08:50.461266    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:08:50.461278    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:08:50.472736    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:50.472746    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:50.498073    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:50.498080    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:50.535713    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:08:50.535721    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:08:50.549384    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:08:50.549395    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:08:50.565572    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:08:50.565583    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:08:50.579842    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:50.579855    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:53.134453    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:58.137090    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:58.137510    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:58.192688    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:08:58.192781    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:58.216479    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:08:58.216560    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:58.228662    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:08:58.228734    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:58.239417    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:08:58.239485    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:58.249803    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:08:58.249866    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:58.267104    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:08:58.267175    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:58.277599    8606 logs.go:276] 0 containers: []
	W0717 11:08:58.277611    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:58.277673    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:58.287953    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:08:58.287970    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:58.287977    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:58.324880    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:08:58.324888    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:08:58.336919    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:08:58.336931    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:58.348630    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:58.348644    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:58.352943    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:08:58.352953    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:08:58.366879    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:08:58.366890    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:08:58.383890    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:08:58.383899    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:08:58.395876    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:08:58.395888    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:08:58.409550    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:08:58.409561    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:08:58.423802    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:08:58.423812    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:08:58.439137    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:58.439147    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:58.474392    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:08:58.474404    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:08:58.487210    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:08:58.487219    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:08:58.498271    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:08:58.498283    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:08:58.509634    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:08:58.509648    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:08:58.526977    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:58.526987    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:01.051855    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:06.054131    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:06.054404    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:06.082697    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:09:06.082811    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:06.100408    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:09:06.100494    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:06.113742    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:09:06.113815    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:06.125617    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:09:06.125685    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:06.135987    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:09:06.136051    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:06.146356    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:09:06.146420    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:06.156360    8606 logs.go:276] 0 containers: []
	W0717 11:09:06.156372    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:06.156427    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:06.166690    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:09:06.166707    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:09:06.166712    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:09:06.181379    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:06.181392    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:06.206826    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:06.206837    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:06.242481    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:09:06.242495    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:09:06.256682    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:09:06.256693    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:09:06.270592    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:09:06.270605    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:09:06.282227    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:09:06.282237    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:09:06.293952    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:09:06.293962    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:09:06.311837    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:09:06.311847    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:09:06.323900    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:06.323914    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:06.360545    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:06.360552    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:06.365104    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:09:06.365130    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:09:06.377164    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:09:06.377177    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:09:06.392806    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:09:06.392815    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:09:06.404688    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:09:06.404698    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:09:06.415955    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:09:06.415967    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:08.930046    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:13.932283    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:13.932396    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:13.943890    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:09:13.943962    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:13.955661    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:09:13.955733    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:13.966562    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:09:13.966626    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:13.976803    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:09:13.976887    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:13.988322    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:09:13.988400    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:13.999310    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:09:13.999382    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:14.009872    8606 logs.go:276] 0 containers: []
	W0717 11:09:14.009884    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:14.009943    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:14.021136    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:09:14.021155    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:09:14.021161    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:09:14.033912    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:09:14.033924    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:09:14.051862    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:14.051873    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:14.057231    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:09:14.057242    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:09:14.081267    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:09:14.081283    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:09:14.093237    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:09:14.093249    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:09:14.105618    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:09:14.105628    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:09:14.121716    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:09:14.121726    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:09:14.135980    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:09:14.135993    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:09:14.148591    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:09:14.148601    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:09:14.162524    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:09:14.162533    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:09:14.174216    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:14.174227    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:14.211687    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:09:14.211698    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:09:14.223298    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:14.223311    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:14.248384    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:09:14.248402    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:14.260238    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:14.260251    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:16.801021    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:21.801697    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:21.801796    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:21.814027    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:09:21.814117    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:21.826690    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:09:21.826759    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:21.837627    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:09:21.837707    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:21.848821    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:09:21.848896    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:21.860329    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:09:21.860404    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:21.871412    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:09:21.871477    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:21.881770    8606 logs.go:276] 0 containers: []
	W0717 11:09:21.881784    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:21.881839    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:21.892388    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:09:21.892405    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:09:21.892411    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:09:21.916698    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:09:21.916708    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:09:21.929184    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:09:21.929194    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:09:21.946819    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:09:21.946829    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:21.959333    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:21.959344    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:21.994517    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:09:21.994531    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:09:22.009685    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:09:22.009694    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:09:22.021521    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:22.021533    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:22.060601    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:09:22.060613    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:09:22.074638    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:09:22.074652    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:09:22.090612    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:09:22.090626    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:09:22.102981    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:22.102993    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:22.127642    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:22.127651    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:22.132340    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:09:22.132349    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:09:22.144224    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:09:22.144238    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:09:22.162296    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:09:22.162309    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:09:24.676490    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:29.678789    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:29.679027    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:29.690575    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:09:29.690659    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:29.701089    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:09:29.701159    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:29.712093    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:09:29.712160    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:29.724211    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:09:29.724279    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:29.735004    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:09:29.735062    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:29.748632    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:09:29.748701    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:29.759379    8606 logs.go:276] 0 containers: []
	W0717 11:09:29.759392    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:29.759460    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:29.770625    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:09:29.770643    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:09:29.770649    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:09:29.785020    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:09:29.785031    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:09:29.800028    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:09:29.800039    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:09:29.811706    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:09:29.811719    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:09:29.824228    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:09:29.824239    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:09:29.840365    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:09:29.840375    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:09:29.856596    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:29.856608    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:29.861560    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:29.861567    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:29.885160    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:09:29.885168    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:29.896729    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:29.896743    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:29.931384    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:09:29.931395    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:09:29.943499    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:09:29.943516    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:09:29.961035    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:09:29.961044    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:09:29.972750    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:29.972761    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:30.011920    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:09:30.011930    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:09:30.029888    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:09:30.029899    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:09:32.544538    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:37.547016    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:37.547925    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:37.586745    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:09:37.586887    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:37.610343    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:09:37.610457    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:37.630464    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:09:37.630540    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:37.642167    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:09:37.642246    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:37.652988    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:09:37.653051    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:37.664998    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:09:37.665073    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:37.675637    8606 logs.go:276] 0 containers: []
	W0717 11:09:37.675650    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:37.675710    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:37.686102    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:09:37.686119    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:09:37.686124    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:09:37.697853    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:37.697865    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:37.702192    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:09:37.702202    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:09:37.716453    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:09:37.716464    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:09:37.737014    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:09:37.737026    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:09:37.754546    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:09:37.754556    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:09:37.774636    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:09:37.774646    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:09:37.786126    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:09:37.786139    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:37.797938    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:37.797952    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:37.832677    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:09:37.832687    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:09:37.850735    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:09:37.850747    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:09:37.866766    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:09:37.866778    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:09:37.878684    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:37.878694    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:37.902465    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:37.902473    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:37.939788    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:09:37.939797    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:09:37.951638    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:09:37.951649    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:09:40.474695    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:45.476723    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:45.476898    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:45.494685    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:09:45.494782    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:45.508741    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:09:45.508816    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:45.521740    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:09:45.521816    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:45.534861    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:09:45.534996    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:45.554093    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:09:45.554173    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:45.566352    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:09:45.566421    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:45.576529    8606 logs.go:276] 0 containers: []
	W0717 11:09:45.576540    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:45.576603    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:45.586897    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:09:45.586911    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:45.586917    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:45.623077    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:09:45.623090    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:09:45.637416    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:09:45.637427    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:09:45.654943    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:45.654955    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:45.678081    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:45.678090    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:45.682187    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:09:45.682196    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:09:45.694197    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:09:45.694210    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:09:45.709551    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:09:45.709564    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:09:45.724306    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:09:45.724317    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:09:45.738667    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:09:45.738676    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:45.752932    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:45.752943    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:45.792546    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:09:45.792557    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:09:45.806996    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:09:45.807006    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:09:45.818619    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:09:45.818635    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:09:45.830783    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:09:45.830795    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:09:45.842477    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:09:45.842489    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:09:48.355294    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:53.357507    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:53.357607    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:53.369097    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:09:53.369172    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:53.380283    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:09:53.380349    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:53.390856    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:09:53.390925    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:53.401315    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:09:53.401384    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:53.412526    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:09:53.412596    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:53.424329    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:09:53.424407    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:53.435885    8606 logs.go:276] 0 containers: []
	W0717 11:09:53.435898    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:53.435975    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:53.448716    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:09:53.448742    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:09:53.448749    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:53.461523    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:09:53.461536    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:09:53.476664    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:09:53.476677    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:09:53.489279    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:09:53.489292    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:09:53.502580    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:09:53.502593    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:09:53.516114    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:09:53.516129    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:09:53.528992    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:53.529004    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:53.572157    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:09:53.572170    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:09:53.589963    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:09:53.589977    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:09:53.608935    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:09:53.608953    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:09:53.622280    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:53.622295    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:53.660796    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:09:53.660811    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:09:53.678784    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:53.678799    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:53.703804    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:53.703817    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:53.709011    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:09:53.709019    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:09:53.723786    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:09:53.723798    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:09:56.240901    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:01.242619    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:01.242761    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:01.254568    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:10:01.254654    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:01.265399    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:10:01.265467    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:01.275984    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:10:01.276041    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:01.286242    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:10:01.286301    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:01.296491    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:10:01.296556    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:01.307409    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:10:01.307467    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:01.317310    8606 logs.go:276] 0 containers: []
	W0717 11:10:01.317324    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:01.317379    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:01.327613    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:10:01.327629    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:10:01.327634    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:10:01.339058    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:10:01.339071    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:10:01.353685    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:10:01.353697    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:10:01.365028    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:01.365041    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:01.388657    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:01.388667    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:01.393190    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:10:01.393197    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:10:01.408025    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:10:01.408035    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:10:01.422119    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:01.422129    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:01.456233    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:10:01.456243    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:10:01.467763    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:10:01.467774    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:10:01.483233    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:10:01.483243    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:10:01.495380    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:10:01.495391    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:10:01.512430    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:10:01.512441    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:01.523937    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:01.523951    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:01.561774    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:10:01.561785    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:10:01.572948    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:10:01.572959    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:10:04.089007    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:09.091297    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:09.091416    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:09.102631    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:10:09.102703    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:09.113376    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:10:09.113448    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:09.125702    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:10:09.125770    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:09.136937    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:10:09.137011    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:09.148070    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:10:09.148140    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:09.159647    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:10:09.159720    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:09.170986    8606 logs.go:276] 0 containers: []
	W0717 11:10:09.170998    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:09.171056    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:09.185829    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:10:09.185847    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:10:09.185853    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:10:09.198885    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:10:09.198901    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:10:09.214200    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:10:09.214213    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:10:09.231499    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:10:09.231511    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:10:09.247233    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:09.247243    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:09.270779    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:09.270795    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:09.309918    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:09.309931    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:09.347613    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:10:09.347626    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:10:09.363573    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:10:09.363588    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:10:09.376473    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:10:09.376484    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:10:09.389176    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:10:09.389188    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:10:09.400914    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:10:09.400926    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:09.412478    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:09.412489    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:09.416953    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:10:09.416962    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:10:09.432084    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:10:09.432094    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:10:09.443981    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:10:09.443993    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:10:11.963614    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:16.966211    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:16.966671    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:17.004116    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:10:17.004250    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:17.024584    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:10:17.024702    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:17.039658    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:10:17.039731    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:17.052296    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:10:17.052371    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:17.065639    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:10:17.065707    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:17.076187    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:10:17.076261    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:17.087029    8606 logs.go:276] 0 containers: []
	W0717 11:10:17.087039    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:17.087100    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:17.097484    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:10:17.097501    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:17.097506    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:17.135223    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:10:17.135230    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:10:17.149056    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:10:17.149067    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:10:17.164062    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:10:17.164074    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:10:17.176143    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:10:17.176155    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:10:17.197809    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:17.197819    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:17.202034    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:17.202040    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:17.236516    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:10:17.236528    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:10:17.248369    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:10:17.248379    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:10:17.263525    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:10:17.263539    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:10:17.282072    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:17.282083    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:17.308620    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:10:17.308631    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:10:17.323368    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:10:17.323381    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:10:17.337087    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:10:17.337100    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:10:17.356749    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:10:17.356762    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:10:17.368072    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:10:17.368083    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:19.882052    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:24.884316    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:24.884405    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:24.900313    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:10:24.900389    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:24.912020    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:10:24.912098    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:24.923343    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:10:24.923410    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:24.934953    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:10:24.935060    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:24.946000    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:10:24.946081    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:24.957728    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:10:24.957804    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:24.968697    8606 logs.go:276] 0 containers: []
	W0717 11:10:24.968709    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:24.968775    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:24.984693    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:10:24.984711    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:24.984718    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:25.027512    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:10:25.027524    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:10:25.042737    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:10:25.042749    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:10:25.059565    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:10:25.059576    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:10:25.074943    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:10:25.074957    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:10:25.093461    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:10:25.093483    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:10:25.106323    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:10:25.106335    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:25.119608    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:25.119621    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:25.124633    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:10:25.124648    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:10:25.137842    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:10:25.137853    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:10:25.151071    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:10:25.151086    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:10:25.166574    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:10:25.166588    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:10:25.179545    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:25.179556    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:25.203399    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:25.203413    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:25.242843    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:10:25.242863    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:10:25.260824    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:10:25.260837    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:10:27.778306    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:32.780550    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:32.780828    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:32.814397    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:10:32.814526    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:32.831043    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:10:32.831127    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:32.844485    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:10:32.844561    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:32.857317    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:10:32.857393    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:32.868089    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:10:32.868153    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:32.878274    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:10:32.878345    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:32.887970    8606 logs.go:276] 0 containers: []
	W0717 11:10:32.887985    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:32.888043    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:32.900350    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:10:32.900366    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:32.900371    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:32.904888    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:10:32.904895    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:10:32.916670    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:10:32.916680    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:10:32.932668    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:10:32.932680    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:10:32.944571    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:32.944584    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:32.982620    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:10:32.982628    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:10:32.994074    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:32.994086    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:33.017137    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:10:33.017147    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:33.028662    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:10:33.028673    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:10:33.040716    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:10:33.040728    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:10:33.052598    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:10:33.052609    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:10:33.067201    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:10:33.067212    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:10:33.081135    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:10:33.081147    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:10:33.096278    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:10:33.096290    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:10:33.117594    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:10:33.117604    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:10:33.135446    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:33.135457    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:35.676238    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:40.678523    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:40.678654    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:40.690163    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:10:40.690239    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:40.701251    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:10:40.701324    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:40.711497    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:10:40.711567    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:40.723293    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:10:40.723368    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:40.733675    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:10:40.733747    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:40.744046    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:10:40.744116    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:40.755013    8606 logs.go:276] 0 containers: []
	W0717 11:10:40.755026    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:40.755092    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:40.765724    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:10:40.765742    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:10:40.765747    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:10:40.783213    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:10:40.783224    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:10:40.794649    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:40.794664    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:40.817552    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:40.817567    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:40.856723    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:10:40.856740    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:10:40.867839    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:10:40.867852    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:10:40.881102    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:10:40.881113    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:10:40.895203    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:10:40.895220    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:10:40.911433    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:40.911443    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:40.916259    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:40.916266    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:40.951111    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:10:40.951126    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:10:40.969008    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:10:40.969024    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:10:40.982400    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:10:40.982416    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:10:40.993489    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:10:40.993503    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:10:41.005060    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:10:41.005074    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:10:41.016411    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:10:41.016422    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:43.530087    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:48.532406    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:48.532501    8606 kubeadm.go:597] duration metric: took 4m4.583568875s to restartPrimaryControlPlane
	W0717 11:10:48.532560    8606 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 11:10:48.532587    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0717 11:10:49.525594    8606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 11:10:49.530688    8606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:10:49.533523    8606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:10:49.536264    8606 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 11:10:49.536271    8606 kubeadm.go:157] found existing configuration files:
	
	I0717 11:10:49.536295    8606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf
	I0717 11:10:49.539249    8606 kubeadm.go:163] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 11:10:49.539275    8606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:10:49.542341    8606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf
	I0717 11:10:49.544919    8606 kubeadm.go:163] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 11:10:49.544940    8606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:10:49.547959    8606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf
	I0717 11:10:49.550968    8606 kubeadm.go:163] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 11:10:49.550995    8606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:10:49.553627    8606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf
	I0717 11:10:49.555994    8606 kubeadm.go:163] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 11:10:49.556015    8606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
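The four grep/rm pairs above implement one stale-config rule: a kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed so the subsequent kubeadm init regenerates it. A condensed Go sketch of that rule (an assumption-level simplification of the kubeadm.go behaviour visible in the log, not its actual source):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any kubeconfig that is missing or that does
// not mention the expected endpoint, mirroring the grep-then-rm pattern
// in the log above. Needs root to touch /etc/kubernetes in practice.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:51302", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}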
	I0717 11:10:49.558978    8606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 11:10:49.576405    8606 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0717 11:10:49.576447    8606 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 11:10:49.626579    8606 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 11:10:49.626644    8606 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 11:10:49.626704    8606 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 11:10:49.678639    8606 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 11:10:49.682671    8606 out.go:204]   - Generating certificates and keys ...
	I0717 11:10:49.682706    8606 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 11:10:49.682741    8606 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 11:10:49.682781    8606 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 11:10:49.682808    8606 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 11:10:49.682846    8606 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 11:10:49.682888    8606 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 11:10:49.682922    8606 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 11:10:49.682953    8606 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 11:10:49.682996    8606 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 11:10:49.683051    8606 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 11:10:49.683071    8606 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 11:10:49.683103    8606 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 11:10:49.718557    8606 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 11:10:49.961642    8606 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 11:10:50.004421    8606 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 11:10:50.119042    8606 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 11:10:50.153902    8606 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 11:10:50.154240    8606 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 11:10:50.154265    8606 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 11:10:50.243842    8606 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 11:10:50.246774    8606 out.go:204]   - Booting up control plane ...
	I0717 11:10:50.246815    8606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 11:10:50.246846    8606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 11:10:50.250223    8606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 11:10:50.250271    8606 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 11:10:50.250443    8606 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 11:10:54.752391    8606 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501764 seconds
	I0717 11:10:54.752450    8606 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 11:10:54.756518    8606 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 11:10:55.282128    8606 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 11:10:55.282443    8606 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-891000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 11:10:55.791369    8606 kubeadm.go:310] [bootstrap-token] Using token: lrnl0s.nssll5otgr0d7k5c
	I0717 11:10:55.795111    8606 out.go:204]   - Configuring RBAC rules ...
	I0717 11:10:55.795187    8606 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 11:10:55.795251    8606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 11:10:55.802801    8606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 11:10:55.803705    8606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 11:10:55.804591    8606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 11:10:55.805518    8606 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 11:10:55.808644    8606 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 11:10:55.962782    8606 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 11:10:56.195305    8606 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 11:10:56.195773    8606 kubeadm.go:310] 
	I0717 11:10:56.195814    8606 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 11:10:56.195819    8606 kubeadm.go:310] 
	I0717 11:10:56.195868    8606 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 11:10:56.195873    8606 kubeadm.go:310] 
	I0717 11:10:56.195886    8606 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 11:10:56.195922    8606 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 11:10:56.195960    8606 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 11:10:56.195965    8606 kubeadm.go:310] 
	I0717 11:10:56.195991    8606 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 11:10:56.195996    8606 kubeadm.go:310] 
	I0717 11:10:56.196025    8606 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 11:10:56.196028    8606 kubeadm.go:310] 
	I0717 11:10:56.196050    8606 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 11:10:56.196094    8606 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 11:10:56.196127    8606 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 11:10:56.196132    8606 kubeadm.go:310] 
	I0717 11:10:56.196183    8606 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 11:10:56.196238    8606 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 11:10:56.196241    8606 kubeadm.go:310] 
	I0717 11:10:56.196289    8606 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lrnl0s.nssll5otgr0d7k5c \
	I0717 11:10:56.196357    8606 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c24be85cc8a3b21770f1d422f860354652361b15e4e8167266dbe73d5c2037be \
	I0717 11:10:56.196368    8606 kubeadm.go:310] 	--control-plane 
	I0717 11:10:56.196370    8606 kubeadm.go:310] 
	I0717 11:10:56.196414    8606 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 11:10:56.196416    8606 kubeadm.go:310] 
	I0717 11:10:56.196461    8606 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lrnl0s.nssll5otgr0d7k5c \
	I0717 11:10:56.196511    8606 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c24be85cc8a3b21770f1d422f860354652361b15e4e8167266dbe73d5c2037be 
	I0717 11:10:56.196619    8606 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 11:10:56.196710    8606 cni.go:84] Creating CNI manager for ""
	I0717 11:10:56.196720    8606 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:10:56.200575    8606 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 11:10:56.208649    8606 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 11:10:56.211570    8606 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
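The scp line above writes a 496-byte bridge CNI conflist into /etc/cni/net.d. The exact payload is not shown in the log; the sketch below writes a representative bridge-plus-portmap conflist of the standard CNI shape (the field values and the 10.244.0.0/16 subnet are assumptions for illustration, not minikube's verbatim file):

package main

import "os"

// A typical two-plugin bridge conflist: the bridge plugin provides pod
// networking with host-local IPAM, and portmap enables hostPort support.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Writing to /etc/cni/net.d requires root on the node.
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
}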
	I0717 11:10:56.216329    8606 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 11:10:56.216401    8606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 11:10:56.216422    8606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-891000 minikube.k8s.io/updated_at=2024_07_17T11_10_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=running-upgrade-891000 minikube.k8s.io/primary=true
	I0717 11:10:56.258233    8606 kubeadm.go:1113] duration metric: took 41.860708ms to wait for elevateKubeSystemPrivileges
	I0717 11:10:56.258246    8606 ops.go:34] apiserver oom_adj: -16
	I0717 11:10:56.258250    8606 kubeadm.go:394] duration metric: took 4m12.339428125s to StartCluster
	I0717 11:10:56.258259    8606 settings.go:142] acquiring lock: {Name:mkb2460e5e181fb6243e4d9c07c303cabf02ebce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:56.258431    8606 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:10:56.258830    8606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/kubeconfig: {Name:mk593058234481727c8f9c6b6ce8d5b26e4d4302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:56.259225    8606 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:10:56.259229    8606 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 11:10:56.259293    8606 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-891000"
	I0717 11:10:56.259296    8606 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:10:56.259309    8606 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-891000"
	I0717 11:10:56.259319    8606 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-891000"
	W0717 11:10:56.259322    8606 addons.go:243] addon storage-provisioner should already be in state true
	I0717 11:10:56.259323    8606 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-891000"
	I0717 11:10:56.259335    8606 host.go:66] Checking if "running-upgrade-891000" exists ...
	I0717 11:10:56.260172    8606 kapi.go:59] client config for running-upgrade-891000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/client.key", CAFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1046a7730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:10:56.260294    8606 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-891000"
	W0717 11:10:56.260299    8606 addons.go:243] addon default-storageclass should already be in state true
	I0717 11:10:56.260305    8606 host.go:66] Checking if "running-upgrade-891000" exists ...
	I0717 11:10:56.262713    8606 out.go:177] * Verifying Kubernetes components...
	I0717 11:10:56.263019    8606 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 11:10:56.266828    8606 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 11:10:56.266836    8606 sshutil.go:53] new ssh client: &{IP:localhost Port:51270 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/running-upgrade-891000/id_rsa Username:docker}
	I0717 11:10:56.270578    8606 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:56.274472    8606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:56.277620    8606 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:10:56.277626    8606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 11:10:56.277632    8606 sshutil.go:53] new ssh client: &{IP:localhost Port:51270 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/running-upgrade-891000/id_rsa Username:docker}
	I0717 11:10:56.362748    8606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:10:56.368168    8606 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:10:56.368210    8606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:10:56.372615    8606 api_server.go:72] duration metric: took 113.376708ms to wait for apiserver process to appear ...
	I0717 11:10:56.372623    8606 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:10:56.372630    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:56.387076    8606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:10:56.438975    8606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 11:11:01.374747    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:01.374768    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:06.374992    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:06.375012    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:11.375304    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:11.375341    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:16.375914    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:16.375951    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:21.376547    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:21.376589    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:26.377331    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:26.377365    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0717 11:11:26.738463    8606 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0717 11:11:26.742673    8606 out.go:177] * Enabled addons: storage-provisioner
	I0717 11:11:26.750605    8606 addons.go:510] duration metric: took 30.491328458s for enable addons: enabled=[storage-provisioner]
	I0717 11:11:31.378323    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:31.378370    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:36.379582    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:36.379599    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:41.381069    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:41.381098    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:46.382906    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:46.382952    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:51.385211    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:51.385236    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:56.387442    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:56.387546    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:56.400635    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:11:56.400707    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:56.412196    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:11:56.412269    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:56.426089    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:11:56.426156    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:56.440315    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:11:56.440390    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:56.451584    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:11:56.451654    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:56.462005    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:11:56.462066    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:56.478414    8606 logs.go:276] 0 containers: []
	W0717 11:11:56.478425    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:56.478482    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:56.490525    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:11:56.490539    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:56.490544    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:56.523843    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:56.523852    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:56.528130    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:11:56.528139    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:11:56.542516    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:11:56.542525    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:11:56.555984    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:11:56.555995    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:11:56.567913    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:11:56.567923    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:11:56.586257    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:11:56.586270    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:56.598156    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:56.598167    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:56.634844    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:11:56.634859    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:11:56.646777    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:11:56.646786    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:11:56.658505    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:11:56.658516    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:11:56.681646    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:11:56.681656    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:11:56.693313    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:56.693325    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:59.218530    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:04.220870    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:04.221066    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:04.239894    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:04.239977    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:04.253571    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:04.253645    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:04.265334    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:04.265410    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:04.276410    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:04.276475    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:04.287342    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:04.287409    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:04.298219    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:04.298291    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:04.308340    8606 logs.go:276] 0 containers: []
	W0717 11:12:04.308351    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:04.308411    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:04.326082    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:04.326100    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:04.326105    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:04.339320    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:04.339333    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:04.375971    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:04.375984    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:04.390414    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:04.390428    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:04.404771    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:04.404785    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:04.420122    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:04.420134    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:04.431651    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:04.431665    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:04.448763    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:04.448774    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:04.472151    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:04.472159    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:04.483604    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:04.483614    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:04.518106    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:04.518114    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:04.522218    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:04.522226    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:04.538195    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:04.538206    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:07.051379    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:12.053818    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:12.053987    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:12.070069    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:12.070157    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:12.088198    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:12.088275    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:12.099394    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:12.099460    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:12.110258    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:12.110328    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:12.120768    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:12.120838    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:12.131297    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:12.131368    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:12.141588    8606 logs.go:276] 0 containers: []
	W0717 11:12:12.141601    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:12.141658    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:12.154241    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:12.154258    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:12.154263    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:12.188429    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:12.188440    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:12.222749    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:12.222761    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:12.236660    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:12.236677    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:12.263433    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:12.263448    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:12.276279    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:12.276288    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:12.288006    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:12.288019    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:12.300344    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:12.300357    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:12.305398    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:12.305407    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:12.319536    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:12.319547    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:12.331651    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:12.331662    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:12.346217    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:12.346231    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:12.367427    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:12.367436    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:14.894836    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:19.897320    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:19.897630    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:19.932527    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:19.932661    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:19.951936    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:19.952047    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:19.965902    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:19.965995    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:19.976928    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:19.977035    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:19.989150    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:19.989229    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:20.001603    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:20.001685    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:20.012118    8606 logs.go:276] 0 containers: []
	W0717 11:12:20.012133    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:20.012194    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:20.022957    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:20.022970    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:20.022975    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:20.059240    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:20.059254    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:20.064241    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:20.064248    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:20.078694    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:20.078705    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:20.090614    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:20.090624    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:20.109510    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:20.109523    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:20.122131    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:20.122142    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:20.158519    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:20.158530    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:20.173663    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:20.173674    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:20.187713    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:20.187725    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:20.199996    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:20.200007    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:20.211424    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:20.211435    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:20.222700    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:20.222710    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:22.749289    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:27.751697    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
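The five-second gap between each "Checking apiserver healthz" line and its matching "stopped:" line reflects a client-side timeout on the probe. Below is a minimal Go sketch of that probe, assuming only what the log shows: the https://10.0.2.15:8443/healthz endpoint, a roughly 5 s timeout, and relaxed TLS verification for the apiserver's certificate. It illustrates the pattern, not minikube's actual api_server.go.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz issues one GET against the apiserver health endpoint and
    // reports an error if it does not answer 200 OK within the client timeout.
    func checkHealthz(url string) error {
        client := &http.Client{
            // ~5 s, matching the gap between "Checking" and "stopped" log lines (assumption)
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // the apiserver cert is self-signed in this sketch, so skip verification (assumption)
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            // on failure the run falls back to container enumeration and log gathering
            fmt.Println(err)
        }
    }

When the probe fails, the run falls back to the container enumeration and log-gathering pass seen in the lines that follow.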
	I0717 11:12:27.751925    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:27.772541    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:27.772637    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:27.786604    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:27.786684    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:27.797904    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:27.797966    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:27.808377    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:27.808449    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:27.819763    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:27.819828    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:27.830644    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:27.830714    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:27.841109    8606 logs.go:276] 0 containers: []
	W0717 11:12:27.841123    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:27.841187    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:27.853668    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:27.853682    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:27.853688    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:27.893659    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:27.893673    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:27.907284    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:27.907300    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:27.918908    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:27.918920    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:27.930209    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:27.930222    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:27.942219    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:27.942230    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:27.959460    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:27.959471    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:27.971387    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:27.971397    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:27.995044    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:27.995054    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:28.007120    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:28.007131    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:28.040849    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:28.040862    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:28.045825    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:28.045832    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:28.059975    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:28.059985    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
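Each gathering pass first locates every control-plane component with a docker name filter (docker ps -a --filter=name=k8s_<component> --format={{.ID}}) and then tails the last 400 log lines of each matching container (docker logs --tail 400 <id>), exactly as the Run: lines above show. The following is a self-contained Go sketch of that loop, assuming a reachable Docker daemon; the component names and command arguments are taken verbatim from the log, but the code itself is an illustration, not minikube's logs.go.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (running or exited)
    // whose name matches the k8s_<component> prefix convention.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil || len(ids) == 0 {
                // mirrors the warning emitted for "kindnet" in the log above
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                // tail the last 400 lines, as in the Run: lines above
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }

Note that the "container status" step also visible above uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, so it works whether or not crictl is installed on the guest.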
	I0717 11:12:30.577064    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:35.579450    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:35.579671    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:35.601125    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:35.601232    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:35.616058    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:35.616129    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:35.628419    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:35.628481    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:35.638872    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:35.638938    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:35.649436    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:35.649500    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:35.659873    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:35.659941    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:35.670461    8606 logs.go:276] 0 containers: []
	W0717 11:12:35.670472    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:35.670528    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:35.681468    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:35.681482    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:35.681486    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:35.699373    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:35.699382    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:35.723915    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:35.723922    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:35.728385    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:35.728390    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:35.742769    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:35.742779    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:35.758152    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:35.758167    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:35.770111    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:35.770122    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:35.781163    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:35.781175    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:35.796264    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:35.796274    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:35.807844    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:35.807855    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:35.820148    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:35.820159    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:35.853801    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:35.853811    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:35.896918    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:35.896932    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:38.413868    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:43.414672    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:43.414859    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:43.441645    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:43.441726    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:43.456444    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:43.456520    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:43.468193    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:43.468253    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:43.479013    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:43.479085    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:43.491103    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:43.491172    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:43.501590    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:43.501658    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:43.514536    8606 logs.go:276] 0 containers: []
	W0717 11:12:43.514546    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:43.514602    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:43.525483    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:43.525499    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:43.525504    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:43.544109    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:43.544119    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:43.568931    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:43.568940    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:43.583537    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:43.583547    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:43.601866    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:43.601877    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:43.636603    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:43.636617    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:43.658268    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:43.658278    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:43.669865    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:43.669875    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:43.684436    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:43.684448    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:43.696236    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:43.696247    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:43.708105    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:43.708116    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:43.743074    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:43.743094    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:43.748286    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:43.748296    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:46.261782    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:51.264082    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:51.264338    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:51.287288    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:51.287407    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:51.303406    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:51.303481    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:51.316562    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:51.316631    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:51.327844    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:51.327917    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:51.338524    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:51.338601    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:51.348644    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:51.348706    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:51.359231    8606 logs.go:276] 0 containers: []
	W0717 11:12:51.359242    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:51.359297    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:51.375529    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:51.375544    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:51.375550    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:51.389513    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:51.389525    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:51.405874    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:51.405885    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:51.425953    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:51.425963    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:51.430308    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:51.430317    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:51.444597    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:51.444607    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:51.456170    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:51.456181    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:51.467836    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:51.467848    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:51.482346    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:51.482359    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:51.493987    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:51.493998    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:51.518907    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:51.518918    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:51.530365    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:51.530375    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:51.564475    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:51.564484    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:54.105252    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:59.107562    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:59.107753    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:59.120187    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:59.120267    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:59.131051    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:59.131124    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:59.141340    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:59.141410    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:59.153497    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:59.153568    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:59.167637    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:59.167704    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:59.178135    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:59.178206    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:59.197695    8606 logs.go:276] 0 containers: []
	W0717 11:12:59.197707    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:59.197766    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:59.208082    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:59.208098    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:59.208103    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:59.220033    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:59.220043    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:59.237554    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:59.237565    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:59.249467    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:59.249477    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:59.272934    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:59.272943    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:59.284522    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:59.284533    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:59.318832    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:59.318841    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:59.323546    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:59.323555    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:59.358532    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:59.358545    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:59.372218    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:59.372228    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:59.386084    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:59.386094    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:59.397607    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:59.397617    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:59.416801    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:59.416812    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:01.930817    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:06.933282    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:06.933618    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:06.965089    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:06.965209    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:06.985536    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:06.985633    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:07.000603    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:13:07.000680    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:07.012538    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:07.012610    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:07.023677    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:07.023745    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:07.034438    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:07.034512    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:07.045743    8606 logs.go:276] 0 containers: []
	W0717 11:13:07.045755    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:07.045816    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:07.056285    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:07.056302    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:07.056308    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:07.089544    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:07.089552    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:07.106418    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:07.106428    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:07.120385    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:07.120399    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:07.135053    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:07.135062    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:07.148125    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:07.148136    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:07.160081    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:07.160093    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:07.171648    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:07.171657    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:07.176553    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:07.176563    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:07.212703    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:07.212714    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:07.224362    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:07.224373    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:07.236756    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:07.236767    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:07.260026    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:07.260036    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:09.787834    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:14.790308    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:14.790547    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:14.814743    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:14.814853    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:14.831315    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:14.831400    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:14.844796    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:13:14.844873    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:14.856371    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:14.856438    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:14.866858    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:14.866934    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:14.877399    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:14.877474    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:14.887688    8606 logs.go:276] 0 containers: []
	W0717 11:13:14.887699    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:14.887759    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:14.900002    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:14.900020    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:14.900026    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:14.965261    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:13:14.965275    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:13:14.976429    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:14.976439    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:14.988621    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:14.988633    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:15.006176    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:15.006186    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:15.010837    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:15.010847    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:15.025079    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:15.025093    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:15.036274    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:15.036286    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:15.071648    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:13:15.071658    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:13:15.083082    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:15.083094    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:15.094950    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:15.094961    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:15.120433    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:15.120440    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:15.131716    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:15.131726    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:15.148661    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:15.148673    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:15.163437    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:15.163447    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:17.677343    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:22.680049    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:22.680410    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:22.714478    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:22.714601    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:22.733929    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:22.734029    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:22.748404    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:13:22.748488    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:22.760336    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:22.760399    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:22.771306    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:22.771376    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:22.781545    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:22.781614    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:22.791870    8606 logs.go:276] 0 containers: []
	W0717 11:13:22.791882    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:22.791945    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:22.802373    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:22.802389    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:13:22.802395    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:13:22.814132    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:22.814141    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:22.837741    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:22.837749    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:22.872480    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:22.872494    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:22.887169    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:22.887179    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:22.902254    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:22.902270    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:22.913951    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:22.913964    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:22.927823    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:13:22.927835    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:13:22.939259    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:22.939269    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:22.950826    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:22.950836    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:22.985049    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:22.985059    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:22.996976    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:22.996985    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:23.036520    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:23.036531    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:23.048562    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:23.048574    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:23.052970    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:23.052976    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:25.567807    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:30.570552    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:30.570950    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:30.607716    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:30.607840    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:30.628754    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:30.628871    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:30.648304    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:13:30.648375    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:30.660695    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:30.660768    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:30.671607    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:30.671665    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:30.682724    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:30.682788    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:30.693312    8606 logs.go:276] 0 containers: []
	W0717 11:13:30.693325    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:30.693382    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:30.703832    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:30.703851    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:30.703855    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:30.737296    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:30.737303    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:30.741497    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:13:30.741506    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:13:30.753249    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:30.753260    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:30.765422    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:30.765434    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:30.781419    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:30.781430    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:30.816242    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:30.816252    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:30.831298    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:13:30.831308    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:13:30.843662    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:30.843673    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:30.862163    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:30.862176    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:30.879910    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:30.879922    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:30.894801    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:30.894813    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:30.911537    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:30.911547    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:30.927350    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:30.927361    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:30.939977    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:30.939986    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:33.467153    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:38.469539    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:38.469684    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:38.488639    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:38.488721    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:38.502419    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:38.502490    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:38.514101    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:13:38.514163    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:38.524534    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:38.524605    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:38.535267    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:38.535339    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:38.545864    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:38.545929    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:38.555879    8606 logs.go:276] 0 containers: []
	W0717 11:13:38.555889    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:38.555942    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:38.566352    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:38.566371    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:38.566376    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:38.577597    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:38.577608    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:38.602569    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:38.602579    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:38.615456    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:38.615465    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:38.629292    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:38.629304    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:38.641187    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:38.641199    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:38.655870    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:38.655879    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:38.670660    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:38.670669    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:38.705308    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:38.705317    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:38.720596    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:13:38.720605    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:13:38.731720    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:38.731733    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:38.747572    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:38.747583    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:38.765734    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:38.765743    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:38.770385    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:38.770393    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:38.804408    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:13:38.804422    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:13:41.318423    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:46.320894    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:46.321042    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:46.334947    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:46.335022    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:46.346147    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:46.346219    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:46.356616    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:13:46.356683    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:46.367173    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:46.367248    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:46.378321    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:46.378407    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:46.389189    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:46.389259    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:46.400276    8606 logs.go:276] 0 containers: []
	W0717 11:13:46.400286    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:46.400346    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:46.410739    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:46.410760    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:46.410766    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:46.426312    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:46.426322    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:46.440802    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:46.440816    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:46.454663    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:46.454672    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:46.466819    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:46.466830    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:46.479021    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:46.479033    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:46.491212    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:46.491222    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:46.511194    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:46.511205    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:46.536791    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:46.536798    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:46.571784    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:13:46.571794    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:13:46.584284    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:13:46.584294    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:13:46.595986    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:46.595997    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:46.610657    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:46.610669    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:46.623007    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:46.623017    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:46.656352    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:46.656360    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:49.162653    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:54.163076    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:54.163199    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:54.177591    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:54.177675    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:54.189656    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:54.189718    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:54.200070    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:13:54.200137    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:54.210707    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:54.210776    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:54.221538    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:54.221606    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:54.231443    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:54.231509    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:54.241771    8606 logs.go:276] 0 containers: []
	W0717 11:13:54.241782    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:54.241839    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:54.252289    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:54.252306    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:13:54.252311    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:13:54.269145    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:13:54.269157    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:13:54.281037    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:54.281048    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:54.292410    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:54.292421    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:54.337125    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:54.337136    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:54.372706    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:54.372716    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:54.376841    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:54.376846    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:54.388830    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:54.388842    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:54.403570    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:54.403581    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:54.418101    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:54.418111    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:54.432416    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:54.432429    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:54.457614    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:54.457626    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:54.469884    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:54.469899    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:54.505581    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:54.505595    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:54.524524    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:54.524534    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:57.044092    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:02.047051    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:02.047415    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:02.082926    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:02.083062    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:02.101979    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:02.102080    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:02.116405    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:02.116488    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:02.128837    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:02.128900    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:02.149978    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:02.150048    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:02.160772    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:02.160837    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:02.172832    8606 logs.go:276] 0 containers: []
	W0717 11:14:02.172845    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:02.172905    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:02.184226    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:02.184242    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:02.184248    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:02.188655    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:02.188665    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:02.200131    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:02.200142    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:02.212208    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:02.212218    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:02.246865    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:02.246876    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:02.261209    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:02.261219    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:02.272819    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:02.272828    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:02.288254    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:02.288263    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:02.311522    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:02.311532    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:02.326000    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:02.326009    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:02.337423    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:02.337433    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:02.349700    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:02.349711    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:02.368036    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:02.368052    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:02.407803    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:02.407815    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:02.420242    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:02.420253    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:04.933758    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:09.936483    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:09.936758    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:09.965447    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:09.965581    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:09.983924    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:09.984019    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:09.998287    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:09.998362    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:10.014554    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:10.014625    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:10.025590    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:10.025658    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:10.039836    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:10.039905    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:10.050741    8606 logs.go:276] 0 containers: []
	W0717 11:14:10.050754    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:10.050805    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:10.061985    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:10.062004    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:10.062015    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:10.067080    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:10.067090    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:10.081145    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:10.081157    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:10.097517    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:10.097527    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:10.121542    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:10.121554    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:10.137021    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:10.137036    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:10.152414    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:10.152427    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:10.166736    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:10.166749    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:10.201135    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:10.201149    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:10.216701    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:10.216711    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:10.228014    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:10.228028    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:10.242584    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:10.242598    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:10.264126    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:10.264136    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:10.275928    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:10.275938    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:10.287602    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:10.287612    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:12.829979    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:17.832804    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:17.833093    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:17.863101    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:17.863234    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:17.882573    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:17.882652    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:17.896991    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:17.897059    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:17.908651    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:17.908714    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:17.919428    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:17.919485    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:17.929736    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:17.929801    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:17.940420    8606 logs.go:276] 0 containers: []
	W0717 11:14:17.940431    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:17.940487    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:17.950876    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:17.950898    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:17.950904    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:17.963063    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:17.963077    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:17.974916    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:17.974927    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:17.999228    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:17.999236    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:18.034117    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:18.034128    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:18.046532    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:18.046543    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:18.083095    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:18.083106    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:18.098783    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:18.098794    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:18.112637    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:18.112647    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:18.127689    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:18.127698    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:18.132384    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:18.132390    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:18.145450    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:18.145459    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:18.157273    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:18.157283    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:18.168919    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:18.168933    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:18.186624    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:18.186637    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:20.700428    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:25.702848    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:25.703076    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:25.728061    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:25.728167    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:25.746337    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:25.746417    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:25.760520    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:25.760589    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:25.776630    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:25.776690    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:25.786789    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:25.786845    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:25.797369    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:25.797439    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:25.807462    8606 logs.go:276] 0 containers: []
	W0717 11:14:25.807478    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:25.807540    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:25.817975    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:25.817992    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:25.817997    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:25.829126    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:25.829136    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:25.864146    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:25.864156    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:25.875481    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:25.875495    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:25.887370    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:25.887384    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:25.898840    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:25.898852    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:25.913576    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:25.913584    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:25.918234    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:25.918250    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:25.932594    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:25.932606    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:25.946851    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:25.946864    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:25.960659    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:25.960672    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:25.973418    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:25.973428    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:26.008100    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:26.008109    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:26.019881    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:26.019891    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:26.038518    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:26.038529    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:28.564642    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:33.567053    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:33.567159    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:33.578517    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:33.578588    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:33.601288    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:33.601365    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:33.613013    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:33.613094    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:33.624824    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:33.624894    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:33.636043    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:33.636115    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:33.647494    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:33.647564    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:33.658646    8606 logs.go:276] 0 containers: []
	W0717 11:14:33.658657    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:33.658717    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:33.670553    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:33.670572    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:33.670577    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:33.683694    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:33.683707    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:33.696220    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:33.696231    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:33.708647    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:33.708660    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:33.724695    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:33.724706    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:33.735945    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:33.735955    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:33.773929    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:33.773945    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:33.788798    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:33.788812    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:33.807779    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:33.807795    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:33.812345    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:33.812354    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:33.825393    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:33.825410    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:33.842761    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:33.842773    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:33.867946    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:33.867959    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:33.903956    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:33.903978    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:33.917008    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:33.917026    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:36.449908    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:41.452374    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:41.452813    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:41.484783    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:41.484910    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:41.504670    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:41.504753    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:41.519310    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:41.519387    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:41.531547    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:41.531617    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:41.542354    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:41.542417    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:41.553536    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:41.553603    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:41.564185    8606 logs.go:276] 0 containers: []
	W0717 11:14:41.564197    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:41.564253    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:41.574490    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:41.574507    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:41.574513    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:41.593982    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:41.593995    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:41.612551    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:41.612563    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:41.626802    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:41.626812    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:41.640380    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:41.640390    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:41.652813    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:41.652826    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:41.665180    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:41.665191    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:41.701024    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:41.701039    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:41.712619    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:41.712630    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:41.724890    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:41.724901    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:41.730097    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:41.730107    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:41.742672    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:41.742682    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:41.755570    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:41.755581    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:41.778221    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:41.778228    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:41.811281    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:41.811291    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:44.327934    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:49.330540    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:49.330644    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:49.342103    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:49.342175    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:49.353734    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:49.353805    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:49.364347    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:49.364420    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:49.375147    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:49.375221    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:49.385859    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:49.385927    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:49.396316    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:49.396386    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:49.406338    8606 logs.go:276] 0 containers: []
	W0717 11:14:49.406353    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:49.406412    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:49.421532    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:49.421549    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:49.421554    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:49.433239    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:49.433250    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:49.446530    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:49.446541    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:49.451534    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:49.451541    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:49.488441    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:49.488451    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:49.508985    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:49.508999    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:49.534536    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:49.534549    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:49.570634    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:49.570646    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:49.584351    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:49.584363    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:49.596040    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:49.596050    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:49.619222    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:49.619234    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:49.630851    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:49.630862    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:49.645381    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:49.645392    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:49.657617    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:49.657628    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:49.671200    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:49.671211    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:52.188706    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:57.191007    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:57.195555    8606 out.go:177] 
	W0717 11:14:57.199529    8606 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0717 11:14:57.199537    8606 out.go:239] * 
	W0717 11:14:57.200269    8606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:14:57.211366    8606 out.go:177] 

                                                
                                                
** /stderr **
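The stderr capture above shows the failure pattern: roughly every eight seconds minikube probes https://10.0.2.15:8443/healthz with a 5-second client timeout, and after each miss it re-enumerates the k8s_* control-plane containers and tails their logs, until the 6m0s node-wait deadline expires. The sketch below reconstructs one iteration of that diagnostics pass; the curl probe is a hypothetical stand-in for minikube's in-process healthz check (-k because the apiserver certificate is self-signed), while the docker, journalctl, dmesg, and kubectl invocations are taken verbatim from the capture.

    #!/usr/bin/env bash
    # One iteration of the wait-and-diagnose loop seen in the stderr capture.
    # Probe the apiserver healthz endpoint (5s timeout, matching the log's
    # "Client.Timeout exceeded" intervals); exit early if it ever answers.
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz && exit 0

    # On a miss, enumerate each control-plane container by name filter,
    # exactly as the capture does.
    for component in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                     kube-controller-manager kindnet storage-provisioner; do
      docker ps -a --filter=name=k8s_${component} --format={{.ID}}
    done

    # Tail the logs of a found container (400 lines, as in the capture);
    # fa0dd532dea1 is the kube-apiserver container ID from this run.
    docker logs --tail 400 fa0dd532dea1

    # Collect host-side context, again verbatim from the capture.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig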
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-891000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
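For context, TestRunningBinaryUpgrade drives a two-step in-place upgrade, as recorded in the Audit table below: the profile is first created with the released v1.26.0 binary, then start is re-run on the same profile with the binary under test. A minimal manual repro sketch follows; the ./minikube-v1.26.0 path is hypothetical (the test harness downloads and manages the old release itself), while both command lines are taken from this report.

    # Step 1: create the profile with the old release (per the Audit table).
    ./minikube-v1.26.0 start -p running-upgrade-891000 --memory=2200 --vm-driver=qemu2

    # Step 2: re-start the same profile with the binary under test
    # (the command that exited with status 80 above).
    out/minikube-darwin-arm64 start -p running-upgrade-891000 --memory=2200 \
      --alsologtostderr -v=1 --driver=qemu2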
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-17 11:14:57.324884 -0700 PDT m=+1279.969191751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-891000 -n running-upgrade-891000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-891000 -n running-upgrade-891000: exit status 2 (15.729100375s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-891000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-016000          | force-systemd-flag-016000 | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-794000              | force-systemd-env-794000  | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-794000           | force-systemd-env-794000  | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT | 17 Jul 24 11:05 PDT |
	| start   | -p docker-flags-816000                | docker-flags-816000       | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-016000             | force-systemd-flag-016000 | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-016000          | force-systemd-flag-016000 | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT | 17 Jul 24 11:05 PDT |
	| start   | -p cert-expiration-696000             | cert-expiration-696000    | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-816000 ssh               | docker-flags-816000       | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-816000 ssh               | docker-flags-816000       | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-816000                | docker-flags-816000       | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT | 17 Jul 24 11:05 PDT |
	| start   | -p cert-options-448000                | cert-options-448000       | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-448000 ssh               | cert-options-448000       | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-448000 -- sudo        | cert-options-448000       | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-448000                | cert-options-448000       | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT | 17 Jul 24 11:05 PDT |
	| start   | -p running-upgrade-891000             | minikube                  | jenkins | v1.26.0 | 17 Jul 24 11:05 PDT | 17 Jul 24 11:06 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-891000             | running-upgrade-891000    | jenkins | v1.33.1 | 17 Jul 24 11:06 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-696000             | cert-expiration-696000    | jenkins | v1.33.1 | 17 Jul 24 11:08 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-696000             | cert-expiration-696000    | jenkins | v1.33.1 | 17 Jul 24 11:08 PDT | 17 Jul 24 11:08 PDT |
	| start   | -p kubernetes-upgrade-067000          | kubernetes-upgrade-067000 | jenkins | v1.33.1 | 17 Jul 24 11:08 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-067000          | kubernetes-upgrade-067000 | jenkins | v1.33.1 | 17 Jul 24 11:08 PDT | 17 Jul 24 11:08 PDT |
	| start   | -p kubernetes-upgrade-067000          | kubernetes-upgrade-067000 | jenkins | v1.33.1 | 17 Jul 24 11:08 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-067000          | kubernetes-upgrade-067000 | jenkins | v1.33.1 | 17 Jul 24 11:08 PDT | 17 Jul 24 11:08 PDT |
	| start   | -p stopped-upgrade-058000             | minikube                  | jenkins | v1.26.0 | 17 Jul 24 11:09 PDT | 17 Jul 24 11:09 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-058000 stop           | minikube                  | jenkins | v1.26.0 | 17 Jul 24 11:09 PDT | 17 Jul 24 11:09 PDT |
	| start   | -p stopped-upgrade-058000             | stopped-upgrade-058000    | jenkins | v1.33.1 | 17 Jul 24 11:09 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 11:09:57
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 11:09:57.195923    8746 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:09:57.196279    8746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:09:57.196286    8746 out.go:304] Setting ErrFile to fd 2...
	I0717 11:09:57.196392    8746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:09:57.196660    8746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:09:57.198140    8746 out.go:298] Setting JSON to false
	I0717 11:09:57.218226    8746 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5969,"bootTime":1721233828,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:09:57.218305    8746 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:09:57.222699    8746 out.go:177] * [stopped-upgrade-058000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:09:57.230682    8746 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:09:57.230778    8746 notify.go:220] Checking for updates...
	I0717 11:09:57.237642    8746 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:09:57.239093    8746 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:09:57.242654    8746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:09:57.245625    8746 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:09:57.248649    8746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:09:57.251876    8746 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:09:57.255634    8746 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 11:09:57.258635    8746 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:09:57.265605    8746 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:09:57.272675    8746 start.go:297] selected driver: qemu2
	I0717 11:09:57.272681    8746 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-058000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51504 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:09:57.272742    8746 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:09:57.275373    8746 cni.go:84] Creating CNI manager for ""
	I0717 11:09:57.275438    8746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:09:57.275464    8746 start.go:340] cluster config:
	{Name:stopped-upgrade-058000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51504 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:09:57.275521    8746 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:09:57.282617    8746 out.go:177] * Starting "stopped-upgrade-058000" primary control-plane node in "stopped-upgrade-058000" cluster
	I0717 11:09:57.285624    8746 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0717 11:09:57.285642    8746 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0717 11:09:57.285666    8746 cache.go:56] Caching tarball of preloaded images
	I0717 11:09:57.285746    8746 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:09:57.285752    8746 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0717 11:09:57.285808    8746 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/config.json ...
	I0717 11:09:57.286302    8746 start.go:360] acquireMachinesLock for stopped-upgrade-058000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:09:57.286338    8746 start.go:364] duration metric: took 29.584µs to acquireMachinesLock for "stopped-upgrade-058000"
	I0717 11:09:57.286346    8746 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:09:57.286351    8746 fix.go:54] fixHost starting: 
	I0717 11:09:57.286467    8746 fix.go:112] recreateIfNeeded on stopped-upgrade-058000: state=Stopped err=<nil>
	W0717 11:09:57.286475    8746 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:09:57.290654    8746 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-058000" ...
	I0717 11:09:56.240901    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:57.297637    8746 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:09:57.297710    8746 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51470-:22,hostfwd=tcp::51471-:2376,hostname=stopped-upgrade-058000 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/disk.qcow2
	I0717 11:09:57.343742    8746 main.go:141] libmachine: STDOUT: 
	I0717 11:09:57.343780    8746 main.go:141] libmachine: STDERR: 
	I0717 11:09:57.343786    8746 main.go:141] libmachine: Waiting for VM to start (ssh -p 51470 docker@127.0.0.1)...
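
The qemu-system-aarch64 invocation logged above restarts the stopped VM: UEFI firmware from pflash, hvf acceleration, 2200 MB and 2 vCPUs, and a user-mode NIC that forwards host ports 51470 and 51471 to the guest's SSH and Docker daemons. A minimal Go sketch of assembling the same invocation, with paths and ports copied from the log (illustrative only, not minikube's actual qemu2 driver code):

-- example --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	base := "/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000"
	args := []string{
		"-M", "virt,highmem=off", // arm64 "virt" machine type, as in the log
		"-cpu", "host",
		// UEFI firmware image shipped with Homebrew's qemu
		"-drive", "file=/opt/homebrew/Cellar/qemu/9.0.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash",
		"-display", "none",
		"-accel", "hvf", // Hypervisor.framework acceleration on Apple silicon
		"-m", "2200", "-smp", "2",
		"-boot", "d",
		"-cdrom", base + "/boot2docker.iso",
		"-qmp", "unix:" + base + "/monitor,server,nowait",
		"-pidfile", base + "/qemu.pid",
		// User-mode NIC: host ports 51470/51471 forward to guest SSH/Docker.
		"-nic", "user,model=virtio,hostfwd=tcp::51470-:22,hostfwd=tcp::51471-:2376,hostname=stopped-upgrade-058000",
		"-daemonize",
		base + "/disk.qcow2",
	}
	out, err := exec.Command("qemu-system-aarch64", args...).CombinedOutput()
	fmt.Printf("STDOUT/STDERR: %s err: %v\n", out, err)
}
-- /example --
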
	I0717 11:10:01.242619    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:01.242761    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:01.254568    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:10:01.254654    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:01.265399    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:10:01.265467    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:01.275984    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:10:01.276041    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:01.286242    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:10:01.286301    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:01.296491    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:10:01.296556    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:01.307409    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:10:01.307467    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:01.317310    8606 logs.go:276] 0 containers: []
	W0717 11:10:01.317324    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:01.317379    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:01.327613    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:10:01.327629    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:10:01.327634    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:10:01.339058    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:10:01.339071    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:10:01.353685    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:10:01.353697    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:10:01.365028    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:01.365041    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:01.388657    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:01.388667    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:01.393190    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:10:01.393197    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:10:01.408025    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:10:01.408035    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:10:01.422119    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:01.422129    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:01.456233    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:10:01.456243    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:10:01.467763    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:10:01.467774    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:10:01.483233    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:10:01.483243    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:10:01.495380    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:10:01.495391    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:10:01.512430    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:10:01.512441    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:01.523937    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:01.523951    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:01.561774    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:10:01.561785    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:10:01.572948    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:10:01.572959    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
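
The interleaved lines from pid 8606 follow a fixed diagnostic loop: probe the apiserver's /healthz endpoint with a short per-request timeout, and when the probe fails, enumerate the k8s_* containers with docker ps and tail each one's logs before trying again. A hedged Go sketch of that probe-then-gather pattern (the cadence and overall deadline are assumptions, not minikube's exact values):

-- example --
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func pollHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped:" line
		Transport: &http.Transport{
			// The apiserver inside the VM serves a cert we have not pinned here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		} else {
			// This is the point where the log lists containers and tails their logs.
			fmt.Printf("stopped: %s: %v\n", url, err)
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("apiserver never reported healthy at %s", url)
}

func main() {
	_ = pollHealthz("https://10.0.2.15:8443/healthz")
}
-- /example --
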
	I0717 11:10:04.089007    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:09.091297    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:09.091416    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:09.102631    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:10:09.102703    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:09.113376    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:10:09.113448    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:09.125702    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:10:09.125770    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:09.136937    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:10:09.137011    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:09.148070    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:10:09.148140    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:09.159647    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:10:09.159720    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:09.170986    8606 logs.go:276] 0 containers: []
	W0717 11:10:09.170998    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:09.171056    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:09.185829    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:10:09.185847    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:10:09.185853    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:10:09.198885    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:10:09.198901    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:10:09.214200    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:10:09.214213    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:10:09.231499    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:10:09.231511    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:10:09.247233    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:09.247243    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:09.270779    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:09.270795    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:09.309918    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:09.309931    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:09.347613    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:10:09.347626    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:10:09.363573    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:10:09.363588    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:10:09.376473    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:10:09.376484    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:10:09.389176    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:10:09.389188    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:10:09.400914    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:10:09.400926    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:09.412478    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:09.412489    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:09.416953    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:10:09.416962    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:10:09.432084    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:10:09.432094    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:10:09.443981    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:10:09.443993    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:10:11.963614    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:16.966211    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:16.966671    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:17.004116    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:10:17.004250    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:17.024584    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:10:17.024702    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:17.039658    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:10:17.039731    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:17.052296    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:10:17.052371    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:17.065639    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:10:17.065707    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:17.076187    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:10:17.076261    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:17.087029    8606 logs.go:276] 0 containers: []
	W0717 11:10:17.087039    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:17.087100    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:17.097484    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:10:17.097501    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:17.097506    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:17.135223    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:10:17.135230    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:10:17.149056    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:10:17.149067    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:10:17.164062    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:10:17.164074    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:10:17.176143    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:10:17.176155    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:10:17.197809    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:17.197819    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:17.202034    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:17.202040    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:17.236516    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:10:17.236528    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:10:17.248369    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:10:17.248379    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:10:17.263525    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:10:17.263539    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:10:17.282072    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:17.282083    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:17.308620    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:10:17.308631    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:10:17.323368    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:10:17.323381    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:10:17.337087    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:10:17.337100    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:10:17.356749    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:10:17.356762    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:10:17.368072    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:10:17.368083    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:17.726392    8746 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/config.json ...
	I0717 11:10:17.727131    8746 machine.go:94] provisionDockerMachine start ...
	I0717 11:10:17.727348    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:17.727753    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:17.727768    8746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 11:10:17.804170    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 11:10:17.804191    8746 buildroot.go:166] provisioning hostname "stopped-upgrade-058000"
	I0717 11:10:17.804252    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:17.804393    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:17.804400    8746 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-058000 && echo "stopped-upgrade-058000" | sudo tee /etc/hostname
	I0717 11:10:17.868152    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-058000
	
	I0717 11:10:17.868210    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:17.868336    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:17.868343    8746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-058000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-058000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-058000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 11:10:17.929096    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 11:10:17.929111    8746 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19282-6331/.minikube CaCertPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19282-6331/.minikube}
	I0717 11:10:17.929126    8746 buildroot.go:174] setting up certificates
	I0717 11:10:17.929131    8746 provision.go:84] configureAuth start
	I0717 11:10:17.929135    8746 provision.go:143] copyHostCerts
	I0717 11:10:17.929201    8746 exec_runner.go:144] found /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.pem, removing ...
	I0717 11:10:17.929208    8746 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.pem
	I0717 11:10:17.929309    8746 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.pem (1078 bytes)
	I0717 11:10:17.929493    8746 exec_runner.go:144] found /Users/jenkins/minikube-integration/19282-6331/.minikube/cert.pem, removing ...
	I0717 11:10:17.929497    8746 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19282-6331/.minikube/cert.pem
	I0717 11:10:17.929538    8746 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19282-6331/.minikube/cert.pem (1123 bytes)
	I0717 11:10:17.929645    8746 exec_runner.go:144] found /Users/jenkins/minikube-integration/19282-6331/.minikube/key.pem, removing ...
	I0717 11:10:17.929648    8746 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19282-6331/.minikube/key.pem
	I0717 11:10:17.929688    8746 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19282-6331/.minikube/key.pem (1679 bytes)
	I0717 11:10:17.929780    8746 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-058000 san=[127.0.0.1 localhost minikube stopped-upgrade-058000]
	I0717 11:10:17.973148    8746 provision.go:177] copyRemoteCerts
	I0717 11:10:17.973174    8746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 11:10:17.973180    8746 sshutil.go:53] new ssh client: &{IP:localhost Port:51470 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/id_rsa Username:docker}
	I0717 11:10:18.005649    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 11:10:18.012435    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 11:10:18.019098    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 11:10:18.026758    8746 provision.go:87] duration metric: took 97.616875ms to configureAuth
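
configureAuth above regenerates the Docker TLS material: the CA in certs/ca.pem signs a fresh server certificate whose subject alternative names are exactly the san=[...] list from the log. A self-contained Go sketch of issuing such a SAN-bearing server cert; as a stand-in, the CA here is generated in-process rather than loaded from ca.pem/ca-key.pem:

-- example --
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA (the real flow loads ca.pem/ca-key.pem from disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs shown in the san=[...] log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-058000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-058000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
-- /example --
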
	I0717 11:10:18.026768    8746 buildroot.go:189] setting minikube options for container-runtime
	I0717 11:10:18.026884    8746 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:10:18.026915    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:18.026997    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:18.027002    8746 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 11:10:18.091936    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 11:10:18.091948    8746 buildroot.go:70] root file system type: tmpfs
	I0717 11:10:18.092004    8746 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 11:10:18.092054    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:18.092178    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:18.092215    8746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 11:10:18.156794    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 11:10:18.156856    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:18.156989    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:18.156997    8746 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 11:10:18.501310    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 11:10:18.501324    8746 machine.go:97] duration metric: took 774.1785ms to provisionDockerMachine
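
The "diff ... || { mv ...; systemctl ...; }" one-liner above is an idempotent update: the rendered docker.service is only swapped in, and the daemon only reloaded, enabled, and restarted, when it differs from what is on disk (here diff failed because the unit did not exist yet, so it was installed fresh). The same idiom as a Go sketch; paths and the unit body are placeholders:

-- example --
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// syncUnit only swaps the file in and bounces the service when content changed,
// mirroring the shell pattern in the log.
func syncUnit(path string, desired []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, desired) {
		return nil // unchanged; avoids a needless docker restart
	}
	if err := os.WriteFile(path+".new", desired, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	_ = syncUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n...\n"))
}
-- /example --
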
	I0717 11:10:18.501330    8746 start.go:293] postStartSetup for "stopped-upgrade-058000" (driver="qemu2")
	I0717 11:10:18.501336    8746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 11:10:18.501402    8746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 11:10:18.501410    8746 sshutil.go:53] new ssh client: &{IP:localhost Port:51470 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/id_rsa Username:docker}
	I0717 11:10:18.532275    8746 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 11:10:18.533555    8746 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 11:10:18.533562    8746 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19282-6331/.minikube/addons for local assets ...
	I0717 11:10:18.533638    8746 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19282-6331/.minikube/files for local assets ...
	I0717 11:10:18.533730    8746 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/ssl/certs/68202.pem -> 68202.pem in /etc/ssl/certs
	I0717 11:10:18.533823    8746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 11:10:18.536230    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/ssl/certs/68202.pem --> /etc/ssl/certs/68202.pem (1708 bytes)
	I0717 11:10:18.543019    8746 start.go:296] duration metric: took 41.684458ms for postStartSetup
	I0717 11:10:18.543036    8746 fix.go:56] duration metric: took 21.256652583s for fixHost
	I0717 11:10:18.543065    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:18.543170    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:18.543179    8746 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 11:10:18.603458    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721239818.457815796
	
	I0717 11:10:18.603479    8746 fix.go:216] guest clock: 1721239818.457815796
	I0717 11:10:18.603483    8746 fix.go:229] Guest: 2024-07-17 11:10:18.457815796 -0700 PDT Remote: 2024-07-17 11:10:18.543038 -0700 PDT m=+21.377990501 (delta=-85.222204ms)
	I0717 11:10:18.603494    8746 fix.go:200] guest clock delta is within tolerance: -85.222204ms
	I0717 11:10:18.603498    8746 start.go:83] releasing machines lock for "stopped-upgrade-058000", held for 21.317122541s
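
The guest-clock check above subtracts the host's wall clock from the timestamp the guest reports and accepts the boot when the drift is small; here the delta is -85.222204ms. Reproducing the arithmetic with the values from the log (the one-second tolerance below is an assumption; minikube's actual threshold may differ):

-- example --
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the two log lines above.
	guest := time.Unix(1721239818, 457815796)  // guest: 1721239818.457815796
	remote := time.Unix(1721239818, 543038000) // host wall clock at the same moment
	delta := guest.Sub(remote)
	const tolerance = time.Second // assumed threshold, not minikube's constant
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta.Abs() <= tolerance)
	// prints: delta=-85.222204ms within tolerance=1s: true
}
-- /example --
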
	I0717 11:10:18.603562    8746 ssh_runner.go:195] Run: cat /version.json
	I0717 11:10:18.603565    8746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 11:10:18.603570    8746 sshutil.go:53] new ssh client: &{IP:localhost Port:51470 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/id_rsa Username:docker}
	I0717 11:10:18.603580    8746 sshutil.go:53] new ssh client: &{IP:localhost Port:51470 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/id_rsa Username:docker}
	W0717 11:10:18.604156    8746 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51470: connect: connection refused
	I0717 11:10:18.604178    8746 retry.go:31] will retry after 352.608115ms: dial tcp [::1]:51470: connect: connection refused
	W0717 11:10:19.003177    8746 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 11:10:19.003304    8746 ssh_runner.go:195] Run: systemctl --version
	I0717 11:10:19.007093    8746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 11:10:19.010408    8746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 11:10:19.010458    8746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 11:10:19.015406    8746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 11:10:19.023020    8746 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 11:10:19.023033    8746 start.go:495] detecting cgroup driver to use...
	I0717 11:10:19.023151    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 11:10:19.032969    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0717 11:10:19.036797    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 11:10:19.040242    8746 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 11:10:19.040273    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 11:10:19.043805    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 11:10:19.047255    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 11:10:19.050643    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 11:10:19.054078    8746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 11:10:19.057043    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 11:10:19.059716    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 11:10:19.063006    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 11:10:19.066396    8746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 11:10:19.069077    8746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 11:10:19.071622    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:19.155966    8746 ssh_runner.go:195] Run: sudo systemctl restart containerd
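
The sed edits above force containerd onto the cgroupfs cgroup driver and the io.containerd.runc.v2 shim before the daemon is reloaded and restarted. The core rewrite, done in Go on an inline sample instead of /etc/containerd/config.toml (a sketch of the transformation, not the tool minikube uses):

-- example --
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, `${1}SystemdCgroup = false`))
}
-- /example --
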
	I0717 11:10:19.162638    8746 start.go:495] detecting cgroup driver to use...
	I0717 11:10:19.162719    8746 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 11:10:19.167866    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 11:10:19.172868    8746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 11:10:19.180975    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 11:10:19.185500    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 11:10:19.190122    8746 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 11:10:19.235781    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 11:10:19.240527    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 11:10:19.245931    8746 ssh_runner.go:195] Run: which cri-dockerd
	I0717 11:10:19.247094    8746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 11:10:19.250042    8746 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 11:10:19.254887    8746 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 11:10:19.341274    8746 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 11:10:19.426886    8746 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 11:10:19.426957    8746 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 11:10:19.432411    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:19.516886    8746 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 11:10:20.673719    8746 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.156807333s)
	I0717 11:10:20.673782    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 11:10:20.683637    8746 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 11:10:20.690230    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 11:10:20.694584    8746 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 11:10:20.774594    8746 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 11:10:20.846293    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:20.923101    8746 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 11:10:20.929228    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 11:10:20.933456    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:21.015513    8746 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 11:10:21.053526    8746 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 11:10:21.053597    8746 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 11:10:21.056682    8746 start.go:563] Will wait 60s for crictl version
	I0717 11:10:21.056740    8746 ssh_runner.go:195] Run: which crictl
	I0717 11:10:21.058293    8746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 11:10:21.072847    8746 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0717 11:10:21.072914    8746 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 11:10:21.088710    8746 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 11:10:21.113162    8746 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0717 11:10:21.113275    8746 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0717 11:10:21.114524    8746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
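
The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any existing line ending in the name, append the fresh 10.0.2.2 mapping, and copy the temp file over /etc/hosts. A hedged Go equivalent of the same update:

-- example --
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any stale line for the name, appends the fresh
// mapping, and replaces the file via rename (the shell version uses /tmp/h.$$).
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "10.0.2.2", "host.minikube.internal"))
}
-- /example --
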
	I0717 11:10:21.118493    8746 kubeadm.go:883] updating cluster {Name:stopped-upgrade-058000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51504 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0717 11:10:21.118542    8746 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0717 11:10:21.118582    8746 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 11:10:21.129047    8746 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 11:10:21.129055    8746 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
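
The "wasn't preloaded" conclusion above follows from a registry rename: the stale VM's Docker store holds the v1.24.1 images tagged k8s.gcr.io/..., while this minikube expects the same images under registry.k8s.io, so the presence check fails and the preload tarball has to be pushed and unpacked again. A simplified Go sketch of that check (illustrative; the real logic is in the docker.go cited by the log):

-- example --
package main

import (
	"fmt"
	"strings"
)

// needsPreload reports whether any required image tag is absent from the
// runtime's `docker images` output.
func needsPreload(have string, required []string) bool {
	present := map[string]bool{}
	for _, img := range strings.Split(strings.TrimSpace(have), "\n") {
		present[img] = true
	}
	for _, img := range required {
		if !present[img] {
			return true
		}
	}
	return false
}

func main() {
	have := "k8s.gcr.io/kube-apiserver:v1.24.1\nk8s.gcr.io/pause:3.7"
	fmt.Println(needsPreload(have, []string{"registry.k8s.io/kube-apiserver:v1.24.1"})) // true
}
-- /example --
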
	I0717 11:10:21.129099    8746 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 11:10:21.132006    8746 ssh_runner.go:195] Run: which lz4
	I0717 11:10:21.133418    8746 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 11:10:21.134648    8746 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 11:10:21.134657    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0717 11:10:22.095290    8746 docker.go:649] duration metric: took 961.899041ms to copy over tarball
	I0717 11:10:22.095348    8746 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 11:10:19.882052    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:23.263978    8746 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.1685995s)
	I0717 11:10:23.264000    8746 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 11:10:23.279865    8746 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 11:10:23.283099    8746 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0717 11:10:23.288266    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:23.368546    8746 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 11:10:24.992105    8746 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.623531917s)
	I0717 11:10:24.992232    8746 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 11:10:25.007117    8746 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 11:10:25.007126    8746 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0717 11:10:25.007131    8746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 11:10:25.011238    8746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:25.013050    8746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:10:25.014951    8746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:25.015038    8746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:10:25.017513    8746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:10:25.017570    8746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:10:25.019573    8746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:10:25.019795    8746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:10:25.020977    8746 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0717 11:10:25.021337    8746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:10:25.022483    8746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:10:25.022594    8746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:10:25.023974    8746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:10:25.024068    8746 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0717 11:10:25.025203    8746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:10:25.025836    8746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:10:25.409243    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:10:25.411541    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:10:25.424703    8746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0717 11:10:25.424728    8746 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:10:25.424783    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:10:25.429548    8746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0717 11:10:25.429569    8746 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:10:25.429618    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:10:25.438683    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0717 11:10:25.443783    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0717 11:10:25.447705    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:10:25.455313    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0717 11:10:25.458223    8746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0717 11:10:25.458240    8746 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:10:25.458275    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:10:25.465667    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0717 11:10:25.468469    8746 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0717 11:10:25.468489    8746 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:10:25.468517    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0717 11:10:25.468526    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0717 11:10:25.474716    8746 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0717 11:10:25.474851    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:10:25.478745    8746 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0717 11:10:25.478768    8746 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0717 11:10:25.478812    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0717 11:10:25.483830    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0717 11:10:25.483955    8746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0717 11:10:25.498821    8746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0717 11:10:25.498835    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0717 11:10:25.498841    8746 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:10:25.498875    8746 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0717 11:10:25.498884    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:10:25.498887    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0717 11:10:25.498935    8746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0717 11:10:25.512136    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:10:25.533096    8746 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0717 11:10:25.533118    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0717 11:10:25.533127    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0717 11:10:25.533222    8746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0717 11:10:25.549742    8746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0717 11:10:25.549761    8746 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:10:25.549811    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:10:25.557344    8746 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0717 11:10:25.557373    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0717 11:10:25.571098    8746 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0717 11:10:25.571119    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0717 11:10:25.585888    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0717 11:10:25.645957    8746 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0717 11:10:25.664372    8746 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0717 11:10:25.664387    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0717 11:10:25.667141    8746 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0717 11:10:25.667244    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:25.771495    8746 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0717 11:10:25.771566    8746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0717 11:10:25.771591    8746 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:25.771652    8746 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:25.808960    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 11:10:25.809080    8746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 11:10:25.819841    8746 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0717 11:10:25.819873    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0717 11:10:25.835171    8746 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0717 11:10:25.835184    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0717 11:10:25.982142    8746 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0717 11:10:25.982164    8746 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 11:10:25.982170    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0717 11:10:26.214202    8746 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 11:10:26.214244    8746 cache_images.go:92] duration metric: took 1.20710525s to LoadCachedImages
	W0717 11:10:26.214283    8746 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
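
The cache-load sequence above follows one fixed pattern per image: stat the tarball on the node (the "Process exited with status 1" lines are the expected cache miss), scp it over from the host cache, then pipe it into docker load. A minimal Go sketch of that flow; runSSH and the "minikube-node" alias are hypothetical stand-ins for minikube's ssh_runner, not its real API:

package main

import (
	"fmt"
	"os/exec"
)

// runSSH executes cmd on the node; a stand-in for minikube's ssh_runner.
func runSSH(cmd string) error {
	return exec.Command("ssh", "minikube-node", cmd).Run()
}

// loadCachedImage mirrors the stat -> scp -> docker-load steps in the log.
func loadCachedImage(localTar, remoteTar string) error {
	// Existence check: stat exits with status 1 when the file is missing,
	// which is exactly what triggers the scp below.
	if runSSH(fmt.Sprintf("stat -c \"%%s %%y\" %s", remoteTar)) != nil {
		if err := exec.Command("scp", localTar, "minikube-node:"+remoteTar).Run(); err != nil {
			return err
		}
	}
	// Stream the tarball into the container runtime.
	return runSSH(fmt.Sprintf("/bin/bash -c \"sudo cat %s | docker load\"", remoteTar))
}

func main() {
	_ = loadCachedImage(
		"/Users/jenkins/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7",
		"/var/lib/minikube/images/pause_3.7",
	)
}
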
	I0717 11:10:26.214292    8746 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0717 11:10:26.214339    8746 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-058000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 11:10:26.214398    8746 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 11:10:26.227839    8746 cni.go:84] Creating CNI manager for ""
	I0717 11:10:26.227852    8746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:10:26.227857    8746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 11:10:26.227866    8746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-058000 NodeName:stopped-upgrade-058000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 11:10:26.227935    8746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-058000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
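The YAML above is rendered from the kubeadm options struct logged at kubeadm.go:181. A minimal sketch of that render step using text/template; the Opts struct, its field names, and the trimmed-down template are illustrative only, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// Opts mirrors a few fields from the kubeadm options logged above.
type Opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
}

// initCfg is a cut-down InitConfiguration template; the real one also
// emits ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, Opts{
		AdvertiseAddress: "10.0.2.15",
		APIServerPort:    8443,
		NodeName:         "stopped-upgrade-058000",
		CRISocket:        "/var/run/cri-dockerd.sock",
	})
}
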
	I0717 11:10:26.227987    8746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0717 11:10:26.231387    8746 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 11:10:26.231416    8746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 11:10:26.234628    8746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0717 11:10:26.239655    8746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 11:10:26.244923    8746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0717 11:10:26.250182    8746 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0717 11:10:26.251534    8746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
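
The one-liner above is the usual hosts-file update idiom: grep -v strips any stale control-plane.minikube.internal line, echo appends the fresh mapping, and the result lands in a temp file before being copied over /etc/hosts. The same logic done natively, as a sketch (the .tmp suffix and permissions are assumptions):

package main

import (
	"os"
	"strings"
)

// patchHosts drops any stale line ending in "\t<host>", appends the fresh
// "ip\thost" entry, and replaces the file via a temp copy (the
// /tmp/h.$$ + sudo cp dance in the log).
func patchHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) { // mirrors grep -v $'\t<host>$'
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	_ = patchHosts("/etc/hosts", "10.0.2.15", "control-plane.minikube.internal")
}
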
	I0717 11:10:26.255302    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:26.339041    8746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:10:26.350010    8746 certs.go:68] Setting up /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000 for IP: 10.0.2.15
	I0717 11:10:26.350018    8746 certs.go:194] generating shared ca certs ...
	I0717 11:10:26.350027    8746 certs.go:226] acquiring lock for ca certs: {Name:mkc544d9d9a3de35c1f6cee821ec7cd5d08f6f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:26.350202    8746 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.key
	I0717 11:10:26.350261    8746 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/proxy-client-ca.key
	I0717 11:10:26.350269    8746 certs.go:256] generating profile certs ...
	I0717 11:10:26.350343    8746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/client.key
	I0717 11:10:26.350361    8746 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.key.8922329e
	I0717 11:10:26.350372    8746 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.crt.8922329e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0717 11:10:26.401776    8746 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.crt.8922329e ...
	I0717 11:10:26.401790    8746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.crt.8922329e: {Name:mk82b84f3bd3e95cf746ad95dd6bad65dcc92ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:26.402931    8746 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.key.8922329e ...
	I0717 11:10:26.402938    8746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.key.8922329e: {Name:mkbee49545955be66796292d3778fb9483e5628e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:26.403104    8746 certs.go:381] copying /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.crt.8922329e -> /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.crt
	I0717 11:10:26.403247    8746 certs.go:385] copying /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.key.8922329e -> /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.key
	I0717 11:10:26.403405    8746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/proxy-client.key
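
The apiserver profile cert generated above carries the service VIP, loopback, and node IP as SANs so the serving certificate validates from every address clients use. A sketch of that kind of SAN-bearing cert issuance with crypto/x509; key size, validity, and subject here are assumptions, not minikube's actual parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServingCert issues an apiserver serving cert signed by the given CA,
// with the four IPs logged above as SANs.
func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the IPs from the crypto.go:68 line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, _, err := signServingCert(ca, caKey)
	fmt.Println(len(der), err)
}
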
	I0717 11:10:26.403538    8746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/6820.pem (1338 bytes)
	W0717 11:10:26.403567    8746 certs.go:480] ignoring /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/6820_empty.pem, impossibly tiny 0 bytes
	I0717 11:10:26.403574    8746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 11:10:26.403601    8746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem (1078 bytes)
	I0717 11:10:26.403626    8746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem (1123 bytes)
	I0717 11:10:26.403650    8746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/key.pem (1679 bytes)
	I0717 11:10:26.403907    8746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/ssl/certs/68202.pem (1708 bytes)
	I0717 11:10:26.404359    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 11:10:26.411216    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 11:10:26.418239    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 11:10:26.425135    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 11:10:26.431818    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 11:10:26.439131    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 11:10:26.446883    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 11:10:26.454605    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 11:10:26.462251    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/ssl/certs/68202.pem --> /usr/share/ca-certificates/68202.pem (1708 bytes)
	I0717 11:10:26.469435    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 11:10:26.475915    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/6820.pem --> /usr/share/ca-certificates/6820.pem (1338 bytes)
	I0717 11:10:26.482775    8746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 11:10:26.487985    8746 ssh_runner.go:195] Run: openssl version
	I0717 11:10:26.489845    8746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68202.pem && ln -fs /usr/share/ca-certificates/68202.pem /etc/ssl/certs/68202.pem"
	I0717 11:10:26.492679    8746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68202.pem
	I0717 11:10:26.493961    8746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:54 /usr/share/ca-certificates/68202.pem
	I0717 11:10:26.493986    8746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68202.pem
	I0717 11:10:26.495754    8746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68202.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 11:10:26.499133    8746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 11:10:26.502253    8746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:10:26.503622    8746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:10:26.503638    8746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:10:26.505297    8746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 11:10:26.508057    8746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6820.pem && ln -fs /usr/share/ca-certificates/6820.pem /etc/ssl/certs/6820.pem"
	I0717 11:10:26.511212    8746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6820.pem
	I0717 11:10:26.512737    8746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:54 /usr/share/ca-certificates/6820.pem
	I0717 11:10:26.512754    8746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6820.pem
	I0717 11:10:26.514473    8746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6820.pem /etc/ssl/certs/51391683.0"
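
Each CA above is installed by asking openssl for the certificate's subject hash and symlinking <hash>.0 in /etc/ssl/certs, the hashed-directory layout OpenSSL uses to locate trust anchors (b5213941.0 for minikubeCA, for instance). A simplified sketch; it links the cert directly rather than via the intermediate /usr/share/ca-certificates copy seen in the log:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash of a PEM cert and creates
// the /etc/ssl/certs/<hash>.0 symlink that the hashed lookup expects.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mirrors `ln -fs`
	return os.Symlink(pem, link)
}

func main() {
	_ = installCA("/usr/share/ca-certificates/minikubeCA.pem")
}
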
	I0717 11:10:26.517478    8746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 11:10:26.519018    8746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 11:10:26.521294    8746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 11:10:26.523040    8746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 11:10:26.525086    8746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 11:10:26.527109    8746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 11:10:26.529052    8746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 11:10:26.530795    8746 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-058000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51504 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:10:26.530862    8746 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 11:10:26.540567    8746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 11:10:26.543641    8746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 11:10:26.543649    8746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 11:10:26.543674    8746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 11:10:26.547163    8746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:10:26.547492    8746 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-058000" does not appear in /Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:10:26.547581    8746 kubeconfig.go:62] /Users/jenkins/minikube-integration/19282-6331/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-058000" cluster setting kubeconfig missing "stopped-upgrade-058000" context setting]
	I0717 11:10:26.547773    8746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/kubeconfig: {Name:mk593058234481727c8f9c6b6ce8d5b26e4d4302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:26.548209    8746 kapi.go:59] client config for stopped-upgrade-058000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/client.key", CAFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106267730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:10:26.548540    8746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 11:10:26.551239    8746 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-058000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
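
Drift detection above is just diff -u of the deployed kubeadm.yaml against the freshly rendered .new file; diff's exit status 1 (files differ) is the reconfigure signal, while status 0 means the config can be reused. A sketch of that check:

package main

import (
	"errors"
	"os/exec"
)

// kubeadmYAMLDrifted returns true (plus the unified diff) when the two
// configs differ: diff exits 0 on identical files, 1 on differences,
// and 2 on trouble, which is surfaced as an error here.
func kubeadmYAMLDrifted(oldPath, newPath string) (bool, []byte, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, out, nil // reconfigure the cluster from the .new file
	}
	return false, nil, err
}

func main() {
	_, _, _ = kubeadmYAMLDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
}
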
	I0717 11:10:26.551247    8746 kubeadm.go:1160] stopping kube-system containers ...
	I0717 11:10:26.551288    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 11:10:26.562543    8746 docker.go:483] Stopping containers: [e372bb421024 5ac69b9301b1 05d92b386885 45e9faca056f 4d18bd71336b 4229c14fdcfb f73468515120 5778510fae0a 6d85b1985a2d]
	I0717 11:10:26.562612    8746 ssh_runner.go:195] Run: docker stop e372bb421024 5ac69b9301b1 05d92b386885 45e9faca056f 4d18bd71336b 4229c14fdcfb f73468515120 5778510fae0a 6d85b1985a2d
	I0717 11:10:26.572942    8746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 11:10:26.578405    8746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:10:26.581293    8746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 11:10:26.581299    8746 kubeadm.go:157] found existing configuration files:
	
	I0717 11:10:26.581322    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/admin.conf
	I0717 11:10:26.583779    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 11:10:26.583802    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:10:26.586772    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/kubelet.conf
	I0717 11:10:26.589736    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 11:10:26.589766    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:10:26.592247    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/controller-manager.conf
	I0717 11:10:26.595043    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 11:10:26.595064    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:10:26.598054    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/scheduler.conf
	I0717 11:10:26.600639    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 11:10:26.600662    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
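
The cleanup loop above keeps each /etc/kubernetes/*.conf only if it already references the expected control-plane endpoint; any grep failure, including "No such file or directory", leads to an unconditional rm -f so kubeadm can regenerate the file. In sketch form:

package main

import "os/exec"

// cleanStaleConfs mirrors the grep-then-rm loop in the log: a conf that
// does not mention the expected endpoint is removed for regeneration.
func cleanStaleConfs(endpoint string, confs []string) {
	for _, c := range confs {
		if exec.Command("sudo", "grep", endpoint, c).Run() != nil {
			_ = exec.Command("sudo", "rm", "-f", c).Run()
		}
	}
}

func main() {
	cleanStaleConfs("https://control-plane.minikube.internal:51504", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
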
	I0717 11:10:26.603249    8746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:10:26.606414    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:10:26.628676    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:10:26.966256    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:10:27.104522    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:10:27.127105    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:10:27.150870    8746 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:10:27.150940    8746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:10:24.884316    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:24.884405    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:24.900313    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:10:24.900389    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:24.912020    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:10:24.912098    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:24.923343    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:10:24.923410    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:24.934953    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:10:24.935060    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:24.946000    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:10:24.946081    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:24.957728    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:10:24.957804    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:24.968697    8606 logs.go:276] 0 containers: []
	W0717 11:10:24.968709    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:24.968775    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:24.984693    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:10:24.984711    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:24.984718    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:25.027512    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:10:25.027524    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:10:25.042737    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:10:25.042749    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:10:25.059565    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:10:25.059576    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:10:25.074943    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:10:25.074957    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:10:25.093461    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:10:25.093483    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:10:25.106323    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:10:25.106335    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:25.119608    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:25.119621    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:25.124633    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:10:25.124648    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:10:25.137842    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:10:25.137853    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:10:25.151071    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:10:25.151086    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:10:25.166574    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:10:25.166588    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:10:25.179545    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:25.179556    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:25.203399    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:25.203413    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:25.242843    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:10:25.242863    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:10:25.260824    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:10:25.260837    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
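
Each "Gathering logs" pass above is the same two-step per component: resolve container IDs with a docker ps name filter, then run docker logs --tail 400 on every match. A sketch of that collection loop (run locally here, whereas the real code executes over SSH in the guest):

package main

import (
	"os/exec"
	"strings"
)

// containersByName mirrors `docker ps -a --filter=name=... --format={{.ID}}`.
func containersByName(filter string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+filter, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

// tailLogs mirrors `docker logs --tail 400 <id>` for each matched container.
func tailLogs(filter string) map[string]string {
	logs := map[string]string{}
	for _, id := range containersByName(filter) {
		out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		logs[id] = string(out)
	}
	return logs
}

func main() {
	_ = tailLogs("k8s_kube-apiserver")
}
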
	I0717 11:10:27.778306    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:27.653205    8746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:10:28.152910    8746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:10:28.157168    8746 api_server.go:72] duration metric: took 1.006297875s to wait for apiserver process to appear ...
	I0717 11:10:28.157179    8746 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:10:28.157188    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
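
Both interleaved processes here (PIDs 8606 and 8746) are stuck in the same wait loop: confirm a kube-apiserver process exists via pgrep, then poll https://10.0.2.15:8443/healthz. Every request in this run times out, which is what ultimately fails these tests. A local sketch of the loop; the real code runs the pgrep over SSH inside the guest, and the timeout and poll interval below are assumptions:

package main

import (
	"crypto/tls"
	"net/http"
	"os/exec"
	"time"
)

// waitAPIServer polls for the apiserver process, then for a 200 from
// /healthz, until the overall deadline expires.
func waitAPIServer(url string, deadline time.Duration) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// minikube's internal CA is not in the local trust store.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		// Is the kube-apiserver process even running?
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			// Does /healthz answer before the per-request timeout?
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return true
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	waitAPIServer("https://10.0.2.15:8443/healthz", 4*time.Minute)
}
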
	I0717 11:10:32.780550    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:32.780828    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:32.814397    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:10:32.814526    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:32.831043    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:10:32.831127    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:32.844485    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:10:32.844561    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:32.857317    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:10:32.857393    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:32.868089    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:10:32.868153    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:32.878274    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:10:32.878345    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:32.887970    8606 logs.go:276] 0 containers: []
	W0717 11:10:32.887985    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:32.888043    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:32.900350    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:10:32.900366    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:32.900371    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:32.904888    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:10:32.904895    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:10:32.916670    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:10:32.916680    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:10:32.932668    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:10:32.932680    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:10:32.944571    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:32.944584    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:32.982620    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:10:32.982628    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:10:32.994074    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:32.994086    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:33.017137    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:10:33.017147    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:33.028662    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:10:33.028673    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:10:33.040716    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:10:33.040728    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:10:33.052598    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:10:33.052609    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:10:33.067201    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:10:33.067212    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:10:33.081135    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:10:33.081147    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:10:33.096278    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:10:33.096290    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:10:33.117594    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:10:33.117604    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:10:33.135446    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:33.135457    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:33.158512    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:33.158533    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:35.676238    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:38.158797    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:38.158827    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:40.678523    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:40.678654    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:40.690163    8606 logs.go:276] 2 containers: [ca6980809a34 f006daa626f0]
	I0717 11:10:40.690239    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:40.701251    8606 logs.go:276] 2 containers: [51c9f58df3d6 45eb3a19d32c]
	I0717 11:10:40.701324    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:40.711497    8606 logs.go:276] 1 containers: [c3ada6e1bb09]
	I0717 11:10:40.711567    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:40.723293    8606 logs.go:276] 2 containers: [dafbf13751ab fd7f8a5743a1]
	I0717 11:10:40.723368    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:40.733675    8606 logs.go:276] 1 containers: [9735272a2b55]
	I0717 11:10:40.733747    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:40.744046    8606 logs.go:276] 2 containers: [4d5d59daf13f 532e66b427f5]
	I0717 11:10:40.744116    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:40.755013    8606 logs.go:276] 0 containers: []
	W0717 11:10:40.755026    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:40.755092    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:40.765724    8606 logs.go:276] 1 containers: [923682a36a72]
	I0717 11:10:40.765742    8606 logs.go:123] Gathering logs for kube-controller-manager [4d5d59daf13f] ...
	I0717 11:10:40.765747    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5d59daf13f"
	I0717 11:10:40.783213    8606 logs.go:123] Gathering logs for kube-controller-manager [532e66b427f5] ...
	I0717 11:10:40.783224    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 532e66b427f5"
	I0717 11:10:40.794649    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:40.794664    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:40.817552    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:40.817567    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:40.856723    8606 logs.go:123] Gathering logs for coredns [c3ada6e1bb09] ...
	I0717 11:10:40.856740    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3ada6e1bb09"
	I0717 11:10:40.867839    8606 logs.go:123] Gathering logs for kube-scheduler [dafbf13751ab] ...
	I0717 11:10:40.867852    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dafbf13751ab"
	I0717 11:10:40.881102    8606 logs.go:123] Gathering logs for etcd [45eb3a19d32c] ...
	I0717 11:10:40.881113    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45eb3a19d32c"
	I0717 11:10:40.895203    8606 logs.go:123] Gathering logs for kube-scheduler [fd7f8a5743a1] ...
	I0717 11:10:40.895220    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd7f8a5743a1"
	I0717 11:10:40.911433    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:40.911443    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:40.916259    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:40.916266    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:40.951111    8606 logs.go:123] Gathering logs for etcd [51c9f58df3d6] ...
	I0717 11:10:40.951126    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51c9f58df3d6"
	I0717 11:10:40.969008    8606 logs.go:123] Gathering logs for kube-apiserver [ca6980809a34] ...
	I0717 11:10:40.969024    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6980809a34"
	I0717 11:10:40.982400    8606 logs.go:123] Gathering logs for storage-provisioner [923682a36a72] ...
	I0717 11:10:40.982416    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 923682a36a72"
	I0717 11:10:40.993489    8606 logs.go:123] Gathering logs for kube-apiserver [f006daa626f0] ...
	I0717 11:10:40.993503    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f006daa626f0"
	I0717 11:10:41.005060    8606 logs.go:123] Gathering logs for kube-proxy [9735272a2b55] ...
	I0717 11:10:41.005074    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9735272a2b55"
	I0717 11:10:41.016411    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:10:41.016422    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:43.530087    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:43.159294    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:43.159334    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:48.532406    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:48.532501    8606 kubeadm.go:597] duration metric: took 4m4.583568875s to restartPrimaryControlPlane
	W0717 11:10:48.532560    8606 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 11:10:48.532587    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0717 11:10:49.525594    8606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 11:10:49.530688    8606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:10:49.533523    8606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:10:49.536264    8606 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 11:10:49.536271    8606 kubeadm.go:157] found existing configuration files:
	
	I0717 11:10:49.536295    8606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf
	I0717 11:10:49.539249    8606 kubeadm.go:163] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 11:10:49.539275    8606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:10:49.542341    8606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf
	I0717 11:10:49.544919    8606 kubeadm.go:163] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 11:10:49.544940    8606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:10:49.547959    8606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf
	I0717 11:10:49.550968    8606 kubeadm.go:163] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 11:10:49.550995    8606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:10:49.553627    8606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf
	I0717 11:10:49.555994    8606 kubeadm.go:163] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 11:10:49.556015    8606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 11:10:49.558978    8606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 11:10:49.576405    8606 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0717 11:10:49.576447    8606 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 11:10:49.626579    8606 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 11:10:49.626644    8606 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 11:10:49.626704    8606 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 11:10:49.678639    8606 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 11:10:49.682671    8606 out.go:204]   - Generating certificates and keys ...
	I0717 11:10:49.682706    8606 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 11:10:49.682741    8606 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 11:10:49.682781    8606 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 11:10:49.682808    8606 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 11:10:49.682846    8606 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 11:10:49.682888    8606 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 11:10:49.682922    8606 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 11:10:49.682953    8606 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 11:10:49.682996    8606 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 11:10:49.683051    8606 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 11:10:49.683071    8606 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 11:10:49.683103    8606 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 11:10:49.718557    8606 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 11:10:49.961642    8606 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 11:10:50.004421    8606 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 11:10:50.119042    8606 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 11:10:50.153902    8606 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 11:10:50.154240    8606 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 11:10:50.154265    8606 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 11:10:50.243842    8606 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 11:10:48.159973    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:48.160047    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:50.246774    8606 out.go:204]   - Booting up control plane ...
	I0717 11:10:50.246815    8606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 11:10:50.246846    8606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 11:10:50.250223    8606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 11:10:50.250271    8606 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 11:10:50.250443    8606 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 11:10:54.752391    8606 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501764 seconds
	I0717 11:10:54.752450    8606 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 11:10:54.756518    8606 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 11:10:55.282128    8606 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 11:10:55.282443    8606 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-891000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 11:10:55.791369    8606 kubeadm.go:310] [bootstrap-token] Using token: lrnl0s.nssll5otgr0d7k5c
	I0717 11:10:55.795111    8606 out.go:204]   - Configuring RBAC rules ...
	I0717 11:10:55.795187    8606 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 11:10:55.795251    8606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 11:10:55.802801    8606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 11:10:55.803705    8606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 11:10:55.804591    8606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 11:10:55.805518    8606 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 11:10:55.808644    8606 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 11:10:55.962782    8606 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 11:10:56.195305    8606 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 11:10:56.195773    8606 kubeadm.go:310] 
	I0717 11:10:56.195814    8606 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 11:10:56.195819    8606 kubeadm.go:310] 
	I0717 11:10:56.195868    8606 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 11:10:56.195873    8606 kubeadm.go:310] 
	I0717 11:10:56.195886    8606 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 11:10:56.195922    8606 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 11:10:56.195960    8606 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 11:10:56.195965    8606 kubeadm.go:310] 
	I0717 11:10:56.195991    8606 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 11:10:56.195996    8606 kubeadm.go:310] 
	I0717 11:10:56.196025    8606 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 11:10:56.196028    8606 kubeadm.go:310] 
	I0717 11:10:56.196050    8606 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 11:10:56.196094    8606 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 11:10:56.196127    8606 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 11:10:56.196132    8606 kubeadm.go:310] 
	I0717 11:10:56.196183    8606 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 11:10:56.196238    8606 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 11:10:56.196241    8606 kubeadm.go:310] 
	I0717 11:10:56.196289    8606 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lrnl0s.nssll5otgr0d7k5c \
	I0717 11:10:56.196357    8606 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c24be85cc8a3b21770f1d422f860354652361b15e4e8167266dbe73d5c2037be \
	I0717 11:10:56.196368    8606 kubeadm.go:310] 	--control-plane 
	I0717 11:10:56.196370    8606 kubeadm.go:310] 
	I0717 11:10:56.196414    8606 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 11:10:56.196416    8606 kubeadm.go:310] 
	I0717 11:10:56.196461    8606 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lrnl0s.nssll5otgr0d7k5c \
	I0717 11:10:56.196511    8606 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c24be85cc8a3b21770f1d422f860354652361b15e4e8167266dbe73d5c2037be 
	I0717 11:10:56.196619    8606 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 11:10:56.196710    8606 cni.go:84] Creating CNI manager for ""
	I0717 11:10:56.196720    8606 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:10:56.200575    8606 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 11:10:56.208649    8606 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 11:10:56.211570    8606 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
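
The 496-byte payload copied above is the bridge CNI conflist minikube generates for /etc/cni/net.d/1-k8s.conflist. A minimal sketch of that shape follows, embedded in a small Go program so it can be printed and inspected; every field value is an illustrative assumption, not the exact bytes from this run.

package main

import "fmt"

// bridgeConflist sketches the kind of config minikube writes to
// /etc/cni/net.d/1-k8s.conflist for the bridge CNI. Subnet and flag
// values are assumptions for illustration, not the logged payload.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() { fmt.Println(bridgeConflist) }
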
	I0717 11:10:56.216329    8606 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 11:10:56.216401    8606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 11:10:56.216422    8606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-891000 minikube.k8s.io/updated_at=2024_07_17T11_10_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=running-upgrade-891000 minikube.k8s.io/primary=true
	I0717 11:10:56.258233    8606 kubeadm.go:1113] duration metric: took 41.860708ms to wait for elevateKubeSystemPrivileges
	I0717 11:10:56.258246    8606 ops.go:34] apiserver oom_adj: -16
	I0717 11:10:56.258250    8606 kubeadm.go:394] duration metric: took 4m12.339428125s to StartCluster
	I0717 11:10:56.258259    8606 settings.go:142] acquiring lock: {Name:mkb2460e5e181fb6243e4d9c07c303cabf02ebce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:56.258431    8606 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:10:56.258830    8606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/kubeconfig: {Name:mk593058234481727c8f9c6b6ce8d5b26e4d4302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:56.259225    8606 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:10:56.259229    8606 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 11:10:56.259293    8606 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-891000"
	I0717 11:10:56.259296    8606 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:10:56.259309    8606 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-891000"
	I0717 11:10:56.259319    8606 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-891000"
	W0717 11:10:56.259322    8606 addons.go:243] addon storage-provisioner should already be in state true
	I0717 11:10:56.259323    8606 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-891000"
	I0717 11:10:56.259335    8606 host.go:66] Checking if "running-upgrade-891000" exists ...
	I0717 11:10:56.260172    8606 kapi.go:59] client config for running-upgrade-891000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/running-upgrade-891000/client.key", CAFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1046a7730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:10:56.260294    8606 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-891000"
	W0717 11:10:56.260299    8606 addons.go:243] addon default-storageclass should already be in state true
	I0717 11:10:56.260305    8606 host.go:66] Checking if "running-upgrade-891000" exists ...
	I0717 11:10:56.262713    8606 out.go:177] * Verifying Kubernetes components...
	I0717 11:10:56.263019    8606 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 11:10:56.266828    8606 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 11:10:56.266836    8606 sshutil.go:53] new ssh client: &{IP:localhost Port:51270 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/running-upgrade-891000/id_rsa Username:docker}
	I0717 11:10:56.270578    8606 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:53.160520    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:53.160557    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:56.274472    8606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:56.277620    8606 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:10:56.277626    8606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 11:10:56.277632    8606 sshutil.go:53] new ssh client: &{IP:localhost Port:51270 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/running-upgrade-891000/id_rsa Username:docker}
	I0717 11:10:56.362748    8606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:10:56.368168    8606 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:10:56.368210    8606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:10:56.372615    8606 api_server.go:72] duration metric: took 113.376708ms to wait for apiserver process to appear ...
	I0717 11:10:56.372623    8606 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:10:56.372630    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:56.387076    8606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:10:56.438975    8606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 11:10:58.161216    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:58.161237    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:01.374747    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:01.374768    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:03.161962    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:03.162009    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:06.374992    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:06.375012    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:08.163017    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:08.163060    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:11.375304    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:11.375341    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:13.164256    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:13.164293    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:16.375914    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:16.375951    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:18.164885    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:18.164933    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:21.376547    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:21.376589    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:26.377331    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:26.377365    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0717 11:11:26.738463    8606 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0717 11:11:26.742673    8606 out.go:177] * Enabled addons: storage-provisioner
	I0717 11:11:23.166735    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:23.166773    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:26.750605    8606 addons.go:510] duration metric: took 30.491328458s for enable addons: enabled=[storage-provisioner]
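
Both test processes above (pids 8606 and 8746) poll the apiserver's /healthz endpoint on a roughly five-second cadence, logging "stopped: ... Client.Timeout exceeded" each time a request times out. A minimal Go sketch of such a polling loop, assuming a self-signed apiserver certificate; the helper name and timeout values are illustrative, not minikube's actual api_server.go implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the wait budget
// is exhausted. The 5s per-request timeout mirrors the "Client.Timeout
// exceeded" errors in the log; InsecureSkipVerify is needed because the
// apiserver cert is not signed by a host-trusted CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(5 * time.Second) // matches the ~5s cadence between checks in the log
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
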
	I0717 11:11:28.167249    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:28.167476    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:28.187489    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:11:28.187585    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:28.202495    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:11:28.202578    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:28.214231    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:11:28.214335    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:28.226654    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:11:28.226725    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:28.240308    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:11:28.240373    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:28.250694    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:11:28.250772    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:28.260790    8746 logs.go:276] 0 containers: []
	W0717 11:11:28.260803    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:28.260864    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:28.271383    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:11:28.271401    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:11:28.271406    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:28.283782    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:28.283796    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:28.288529    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:11:28.288537    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:11:28.300306    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:11:28.300318    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:11:28.319849    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:28.319859    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:28.345791    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:11:28.345803    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:11:28.361037    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:11:28.361050    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:11:28.377889    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:28.377899    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:28.417218    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:11:28.417232    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:11:28.431608    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:11:28.431621    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:11:28.457402    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:11:28.457416    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:11:28.472287    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:11:28.472299    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:11:28.490165    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:11:28.490175    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:11:28.501274    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:28.501285    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:28.609923    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:11:28.609938    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:11:28.624341    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:11:28.624352    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
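
When a healthz check fails, the test falls back to the gathering cycle just completed above: docker ps -a --filter=name=k8s_<component> --format={{.ID}} to find container IDs, then docker logs --tail 400 <id> for each. A rough Go sketch of that pattern, assuming docker is on PATH; the helper names are hypothetical and this is not minikube's actual logs.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists docker container IDs whose names match k8s_<component>,
// the same filter the log lines above run over ssh.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// gather dumps the last 400 log lines of every matching container,
// mirroring the `docker logs --tail 400 <id>` calls above.
func gather(component string) error {
	ids, err := containerIDs(component)
	if err != nil {
		return err
	}
	for _, id := range ids {
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("== %s [%s] ==\n%s", component, id, out)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		if err := gather(c); err != nil {
			fmt.Println("gather", c, "failed:", err)
		}
	}
}
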
	I0717 11:11:31.142711    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:31.378323    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:31.378370    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:36.144971    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:36.145127    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:36.157184    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:11:36.157263    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:36.168390    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:11:36.168469    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:36.179647    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:11:36.179716    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:36.194113    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:11:36.194180    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:36.206084    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:11:36.206155    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:36.216950    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:11:36.217017    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:36.227603    8746 logs.go:276] 0 containers: []
	W0717 11:11:36.227614    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:36.227672    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:36.238154    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:11:36.238171    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:11:36.238177    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:11:36.252983    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:11:36.252994    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:11:36.282175    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:11:36.282186    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:11:36.297206    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:11:36.297217    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:11:36.309184    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:11:36.309194    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:11:36.320654    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:36.320665    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:36.346335    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:36.346342    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:36.385179    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:11:36.385186    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:11:36.403661    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:36.403672    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:36.408437    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:11:36.408445    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:11:36.422247    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:11:36.422257    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:11:36.433437    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:11:36.433447    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:11:36.456068    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:11:36.456078    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:11:36.474150    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:36.474162    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:36.513300    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:11:36.513311    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:11:36.525126    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:11:36.525135    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:36.379582    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:36.379599    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:39.038773    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:41.381069    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:41.381098    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:44.041188    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:44.041467    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:44.071421    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:11:44.071564    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:44.091456    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:11:44.091537    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:44.104709    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:11:44.104778    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:44.115540    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:11:44.115610    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:44.126175    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:11:44.126245    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:44.140523    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:11:44.140598    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:44.154533    8746 logs.go:276] 0 containers: []
	W0717 11:11:44.154545    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:44.154607    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:44.164977    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:11:44.164995    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:11:44.165001    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:11:44.179343    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:11:44.179358    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:11:44.197705    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:11:44.197719    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:11:44.209512    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:11:44.209522    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:11:44.223901    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:11:44.223915    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:11:44.235318    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:11:44.235332    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:44.247236    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:11:44.247247    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:11:44.262076    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:11:44.262093    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:11:44.278986    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:11:44.278998    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:11:44.303269    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:11:44.303286    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:11:44.316854    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:44.316866    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:44.354195    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:44.354209    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:44.358357    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:44.358363    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:44.395303    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:11:44.395313    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:11:44.420824    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:11:44.420835    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:11:44.432506    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:44.432517    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:46.959627    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:46.382906    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:46.382952    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:51.962065    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:51.962232    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:51.978842    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:11:51.978934    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:51.993156    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:11:51.993229    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:52.004292    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:11:52.004358    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:52.017347    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:11:52.017418    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:52.027977    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:11:52.028045    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:52.040723    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:11:52.040800    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:52.051341    8746 logs.go:276] 0 containers: []
	W0717 11:11:52.051353    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:52.051418    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:52.062484    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:11:52.062500    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:11:52.062507    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:11:52.080179    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:11:52.080194    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:11:52.092165    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:52.092178    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:52.131210    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:11:52.131220    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:11:52.145283    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:11:52.145295    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:11:52.159510    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:11:52.159520    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:11:52.185883    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:11:52.185896    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:11:51.385211    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:51.385236    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:52.203434    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:11:52.203444    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:11:52.216167    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:52.216179    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:52.241903    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:11:52.241914    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:52.254172    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:52.254189    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:52.258576    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:52.258582    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:52.294538    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:11:52.294551    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:11:52.305763    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:11:52.305774    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:11:52.324024    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:11:52.324036    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:11:52.338732    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:11:52.338746    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:11:54.853336    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:56.387442    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:56.387546    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:56.400635    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:11:56.400707    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:56.412196    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:11:56.412269    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:56.426089    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:11:56.426156    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:56.440315    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:11:56.440390    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:56.451584    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:11:56.451654    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:56.462005    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:11:56.462066    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:56.478414    8606 logs.go:276] 0 containers: []
	W0717 11:11:56.478425    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:56.478482    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:56.490525    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:11:56.490539    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:56.490544    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:56.523843    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:56.523852    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:56.528130    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:11:56.528139    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:11:56.542516    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:11:56.542525    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:11:56.555984    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:11:56.555995    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:11:56.567913    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:11:56.567923    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:11:56.586257    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:11:56.586270    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:56.598156    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:56.598167    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:56.634844    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:11:56.634859    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:11:56.646777    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:11:56.646786    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:11:56.658505    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:11:56.658516    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:11:56.681646    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:11:56.681656    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:11:56.693313    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:56.693325    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:59.856055    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:59.856242    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:59.878094    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:11:59.878202    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:59.893430    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:11:59.893503    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:59.905735    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:11:59.905808    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:59.915955    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:11:59.916028    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:59.926590    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:11:59.926661    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:59.940190    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:11:59.940259    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:59.955625    8746 logs.go:276] 0 containers: []
	W0717 11:11:59.955636    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:59.955692    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:59.966514    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:11:59.966530    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:59.966537    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:59.971211    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:59.971217    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:00.006955    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:00.006967    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:00.024780    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:00.024792    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:00.036448    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:00.036460    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:00.075540    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:00.075551    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:00.093717    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:00.093728    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:00.107173    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:00.107185    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:00.121814    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:00.121824    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:00.146366    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:00.146378    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:00.158392    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:00.158406    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:00.170282    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:00.170293    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:00.195786    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:00.195795    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:00.210001    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:00.210014    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:00.224308    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:00.224322    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:00.236063    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:00.236074    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:11:59.218530    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:02.750465    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:04.220870    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:04.221066    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:04.239894    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:04.239977    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:04.253571    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:04.253645    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:04.265334    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:04.265410    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:04.276410    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:04.276475    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:04.287342    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:04.287409    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:04.298219    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:04.298291    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:04.308340    8606 logs.go:276] 0 containers: []
	W0717 11:12:04.308351    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:04.308411    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:04.326082    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:04.326100    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:04.326105    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:04.339320    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:04.339333    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:04.375971    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:04.375984    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:04.390414    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:04.390428    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:04.404771    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:04.404785    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:04.420122    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:04.420134    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:04.431651    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:04.431665    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:04.448763    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:04.448774    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:04.472151    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:04.472159    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:04.483604    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:04.483614    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:04.518106    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:04.518114    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:04.522218    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:04.522226    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:04.538195    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:04.538206    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
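Annotation: the block above (and every repetition that follows) is the same pattern from two concurrent minikube processes, PIDs 8606 and 8746: probe the apiserver at https://10.0.2.15:8443/healthz (api_server.go:253), hit the HTTP client timeout (api_server.go:269, "Client.Timeout exceeded while awaiting headers"), then fall back to collecting diagnostics. A minimal Go sketch of such a poll-until-deadline loop is below; the URL and the timeout-driven failure mode come from the log, while the function name and timing constants are illustrative assumptions, not minikube's actual code.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz polls an apiserver /healthz endpoint until it answers 200 OK
// or the overall deadline passes. A sketch only: the per-request timeout
// mirrors the "Client.Timeout exceeded" errors in the log above.
func checkHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout (assumed value)
		Transport: &http.Transport{
			// the cluster serves a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(3 * time.Second) // back off between probes (assumed)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```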
	I0717 11:12:07.051379    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:07.753048    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:07.753283    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:07.783817    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:07.783942    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:07.801888    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:07.801985    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:07.815998    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:07.816075    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:07.827123    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:07.827195    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:07.837483    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:07.837552    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:07.847900    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:07.847966    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:07.859228    8746 logs.go:276] 0 containers: []
	W0717 11:12:07.859240    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:07.859303    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:07.876530    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:07.876547    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:07.876552    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:07.901748    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:07.901759    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:07.940449    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:07.940458    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:07.952529    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:07.952540    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:07.964252    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:07.964262    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:07.975712    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:07.975722    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:07.979772    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:07.979779    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:08.017349    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:08.017362    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:08.029413    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:08.029424    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:08.040718    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:08.040730    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:08.057798    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:08.057810    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:08.071822    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:08.071834    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:08.096906    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:08.096916    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:08.111663    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:08.111675    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:08.126187    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:08.126198    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:08.147443    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:08.147453    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
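Annotation: between probes, each process enumerates control-plane containers by name prefix with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` (the ssh_runner.go:195 lines), then reports the count and IDs (the logs.go:276 lines). A rough Go equivalent, run locally rather than over SSH into the VM as minikube does, could look like this; the helper name and the local-exec shortcut are assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists Docker container IDs whose names match the
// kubeadm-style k8s_<component> prefix, mirroring the
// "docker ps -a --filter=name=... --format={{.ID}}" calls in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// same shape as the "N containers: [...]" lines above
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
```

Note how the two PIDs see different ID sets: 8606 finds one kube-apiserver container (fa0dd532dea1) while 8746 finds two (28f9b708ba6d, e372bb421024), consistent with a restarted apiserver in the second cluster.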
	I0717 11:12:10.660134    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:12.053818    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:12.053987    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:12.070069    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:12.070157    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:12.088198    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:12.088275    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:12.099394    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:12.099460    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:12.110258    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:12.110328    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:12.120768    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:12.120838    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:12.131297    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:12.131368    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:12.141588    8606 logs.go:276] 0 containers: []
	W0717 11:12:12.141601    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:12.141658    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:12.154241    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:12.154258    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:12.154263    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:12.188429    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:12.188440    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:12.222749    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:12.222761    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:12.236660    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:12.236677    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:12.263433    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:12.263448    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:12.276279    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:12.276288    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:12.288006    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:12.288019    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:12.300344    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:12.300357    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:12.305398    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:12.305407    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:12.319536    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:12.319547    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:12.331651    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:12.331662    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:12.346217    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:12.346231    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:12.367427    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:12.367436    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:15.660753    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:15.660956    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:15.684832    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:15.684934    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:15.700140    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:15.700238    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:15.718464    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:15.718531    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:15.728956    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:15.729023    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:15.739419    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:15.739490    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:15.758801    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:15.758868    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:15.768944    8746 logs.go:276] 0 containers: []
	W0717 11:12:15.768956    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:15.769016    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:15.779497    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:15.779515    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:15.779520    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:15.790758    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:15.790769    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:15.806409    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:15.806419    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:15.818274    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:15.818285    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:15.835732    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:15.835742    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:15.861043    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:15.861052    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:15.872956    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:15.872967    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:15.897490    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:15.897504    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:15.934133    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:15.934145    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:15.946863    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:15.946874    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:15.951262    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:15.951268    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:15.965343    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:15.965353    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:15.979228    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:15.979242    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:15.997781    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:15.997795    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:16.010132    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:16.010145    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:16.028098    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:16.028109    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
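Annotation: the "Gathering logs for ..." steps fan out over the exact shell commands visible above: `docker logs --tail 400 <id>` per container, `journalctl` for the kubelet and docker/cri-docker units, a filtered `dmesg`, a crictl-with-docker-fallback for container status, and `kubectl describe nodes` with the in-VM kubeconfig. The sketch below replays those commands locally; the command strings are copied from the log, while the wrapper itself is an assumption (minikube issues them through its SSH runner).

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command via /bin/bash -c, matching the
// ssh_runner.go:195 invocations in the log, and prints its output.
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s <==\n%s", name, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	gather("kube-apiserver [fa0dd532dea1]", "docker logs --tail 400 fa0dd532dea1")
	gather("describe nodes",
		"sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
}
```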
	I0717 11:12:14.894836    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:18.568126    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:19.897320    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:19.897630    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:19.932527    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:19.932661    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:19.951936    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:19.952047    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:19.965902    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:19.965995    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:19.976928    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:19.977035    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:19.989150    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:19.989229    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:20.001603    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:20.001685    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:20.012118    8606 logs.go:276] 0 containers: []
	W0717 11:12:20.012133    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:20.012194    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:20.022957    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:20.022970    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:20.022975    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:20.059240    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:20.059254    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:20.064241    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:20.064248    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:20.078694    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:20.078705    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:20.090614    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:20.090624    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:20.109510    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:20.109523    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:20.122131    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:20.122142    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:20.158519    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:20.158530    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:20.173663    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:20.173674    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:20.187713    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:20.187725    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:20.199996    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:20.200007    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:20.211424    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:20.211435    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:20.222700    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:20.222710    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:22.749289    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:23.570741    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:23.570859    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:23.583042    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:23.583106    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:23.594686    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:23.594772    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:23.605993    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:23.606061    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:23.617078    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:23.617154    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:23.628039    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:23.628104    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:23.638767    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:23.638829    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:23.648811    8746 logs.go:276] 0 containers: []
	W0717 11:12:23.648822    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:23.648880    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:23.659312    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:23.659333    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:23.659340    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:23.663691    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:23.663698    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:23.678641    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:23.678650    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:23.693657    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:23.693670    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:23.718081    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:23.718091    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:23.732214    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:23.732229    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:23.743254    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:23.743266    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:23.761328    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:23.761341    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:23.773317    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:23.773332    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:23.785708    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:23.785718    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:23.824611    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:23.824623    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:23.847618    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:23.847630    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:23.860152    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:23.860162    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:23.898836    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:23.898844    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:23.923162    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:23.923174    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:23.940634    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:23.940646    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:26.454695    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:27.751697    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:27.751925    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:27.772541    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:27.772637    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:27.786604    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:27.786684    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:27.797904    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:27.797966    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:27.808377    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:27.808449    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:27.819763    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:27.819828    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:27.830644    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:27.830714    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:27.841109    8606 logs.go:276] 0 containers: []
	W0717 11:12:27.841123    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:27.841187    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:27.853668    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:27.853682    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:27.853688    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:27.893659    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:27.893673    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:27.907284    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:27.907300    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:27.918908    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:27.918920    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:27.930209    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:27.930222    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:27.942219    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:27.942230    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:27.959460    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:27.959471    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:27.971387    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:27.971397    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:27.995044    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:27.995054    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:28.007120    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:28.007131    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:28.040849    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:28.040862    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:28.045825    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:28.045832    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:28.059975    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:28.059985    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:31.457013    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:31.457224    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:31.477361    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:31.477450    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:31.492926    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:31.493003    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:31.504845    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:31.504923    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:31.515770    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:31.515840    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:31.526349    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:31.526415    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:31.536786    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:31.536858    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:31.546524    8746 logs.go:276] 0 containers: []
	W0717 11:12:31.546534    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:31.546586    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:31.563524    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:31.563541    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:31.563549    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:31.589208    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:31.589219    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:31.600699    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:31.600711    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:31.619729    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:31.619739    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:31.631388    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:31.631399    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:31.635471    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:31.635478    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:31.649622    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:31.649679    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:31.667210    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:31.667226    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:31.680681    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:31.680690    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:31.692439    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:31.692449    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:31.728073    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:31.728088    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:31.742179    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:31.742190    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:31.757374    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:31.757386    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:31.774951    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:31.774965    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:31.799758    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:31.799768    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:31.837745    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:31.837753    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:30.577064    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:34.354363    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:35.579450    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:35.579671    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:35.601125    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:35.601232    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:35.616058    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:35.616129    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:35.628419    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:35.628481    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:35.638872    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:35.638938    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:35.649436    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:35.649500    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:35.659873    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:35.659941    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:35.670461    8606 logs.go:276] 0 containers: []
	W0717 11:12:35.670472    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:35.670528    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:35.681468    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:35.681482    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:35.681486    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:35.699373    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:35.699382    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:35.723915    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:35.723922    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:35.728385    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:35.728390    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:35.742769    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:35.742779    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:35.758152    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:35.758167    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:35.770111    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:35.770122    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:35.781163    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:35.781175    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:35.796264    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:35.796274    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:35.807844    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:35.807855    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:35.820148    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:35.820159    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:35.853801    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:35.853811    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:35.896918    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:35.896932    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:38.413868    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:39.356757    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:39.357307    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:39.386577    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:39.386718    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:39.411821    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:39.411903    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:39.424762    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:39.424847    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:39.435979    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:39.436054    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:39.447999    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:39.448073    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:39.458938    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:39.459001    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:39.468860    8746 logs.go:276] 0 containers: []
	W0717 11:12:39.468873    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:39.468936    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:39.479873    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:39.479890    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:39.479896    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:39.519446    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:39.519458    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:39.527106    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:39.527117    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:39.561835    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:39.561847    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:39.576086    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:39.576096    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:39.595125    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:39.595138    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:39.618213    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:39.618221    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:39.633316    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:39.633326    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:39.645223    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:39.645233    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:39.667068    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:39.667081    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:39.679983    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:39.679994    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:39.691693    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:39.691704    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:39.706348    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:39.706358    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:39.717628    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:39.717638    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:39.729569    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:39.729581    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:39.754292    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:39.754305    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:43.414672    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:43.414859    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:43.441645    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:43.441726    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:43.456444    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:43.456520    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:43.468193    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:43.468253    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:43.479013    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:43.479085    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:43.491103    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:43.491172    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:43.501590    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:43.501658    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:43.514536    8606 logs.go:276] 0 containers: []
	W0717 11:12:43.514546    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:43.514602    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:43.525483    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:43.525499    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:43.525504    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:43.544109    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:43.544119    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:43.568931    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:43.568940    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:43.583537    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:43.583547    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:43.601866    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:43.601877    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:43.636603    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:43.636617    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:43.658268    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:43.658278    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:43.669865    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:43.669875    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:43.684436    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:43.684448    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:43.696236    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:43.696247    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:43.708105    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:43.708116    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:43.743074    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:43.743094    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:43.748286    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:43.748296    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:42.267776    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:46.261782    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:47.270216    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:47.270385    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:47.283615    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:47.283696    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:47.294947    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:47.295015    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:47.305782    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:47.305864    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:47.316770    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:47.316846    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:47.326842    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:47.326905    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:47.337822    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:47.337886    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:47.348125    8746 logs.go:276] 0 containers: []
	W0717 11:12:47.348140    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:47.348195    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:47.358889    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:47.358906    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:47.358912    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:47.371588    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:47.371605    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:47.382475    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:47.382487    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:47.407883    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:47.407895    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:47.443223    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:47.443237    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:47.455493    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:47.455504    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:47.472490    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:47.472500    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:47.484534    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:47.484547    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:47.499070    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:47.499084    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:47.513909    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:47.513920    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:47.531862    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:47.531874    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:47.545270    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:47.545284    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:47.587312    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:47.587328    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:47.612587    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:47.612601    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:47.624232    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:47.624244    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:47.628588    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:47.628595    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:50.148704    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:51.264082    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:51.264338    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:51.287288    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:51.287407    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:51.303406    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:51.303481    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:51.316562    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:51.316631    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:51.327844    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:51.327917    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:51.338524    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:51.338601    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:51.348644    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:51.348706    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:51.359231    8606 logs.go:276] 0 containers: []
	W0717 11:12:51.359242    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:51.359297    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:51.375529    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:51.375544    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:51.375550    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:51.389513    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:51.389525    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:12:51.405874    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:51.405885    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:51.425953    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:51.425963    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:51.430308    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:51.430317    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:51.444597    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:51.444607    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:51.456170    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:51.456181    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:51.467836    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:51.467848    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:51.482346    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:51.482359    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:51.493987    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:51.493998    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:51.518907    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:51.518918    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:51.530365    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:51.530375    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:51.564475    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:51.564484    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:54.105252    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:55.150960    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:55.151193    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:55.180439    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:55.180531    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:55.200209    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:55.200290    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:55.212298    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:55.212373    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:55.222960    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:55.223035    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:55.233499    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:55.233574    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:55.251201    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:55.251271    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:55.261996    8746 logs.go:276] 0 containers: []
	W0717 11:12:55.262008    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:55.262065    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:55.275729    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:55.275746    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:55.275751    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:55.313196    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:55.313209    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:55.324499    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:55.324510    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:55.358603    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:55.358615    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:55.372695    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:55.372706    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:55.387476    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:55.387486    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:55.400811    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:55.400825    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:55.419990    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:55.420003    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:55.432581    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:55.432594    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:55.437358    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:55.437365    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:55.461907    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:55.461918    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:55.475491    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:55.475502    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:55.491112    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:55.491122    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:55.509901    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:55.509912    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:55.521363    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:55.521373    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:55.544625    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:55.544632    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:59.107562    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:59.107753    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:59.120187    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:12:59.120267    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:59.131051    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:12:59.131124    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:59.141340    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:12:59.141410    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:59.153497    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:12:59.153568    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:59.167637    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:12:59.167704    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:59.178135    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:12:59.178206    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:59.197695    8606 logs.go:276] 0 containers: []
	W0717 11:12:59.197707    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:59.197766    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:58.056920    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:59.208082    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:12:59.208098    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:12:59.208103    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:12:59.220033    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:12:59.220043    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:12:59.237554    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:12:59.237565    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:12:59.249467    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:59.249477    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:59.272934    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:12:59.272943    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:59.284522    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:59.284533    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:59.318832    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:59.318841    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:59.323546    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:59.323555    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:59.358532    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:12:59.358545    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:12:59.372218    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:12:59.372228    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:12:59.386084    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:12:59.386094    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:12:59.397607    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:12:59.397617    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:12:59.416801    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:12:59.416812    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
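	Each cycle begins by resolving component names to container IDs with docker ps name filters, which is what produces the logs.go:276 "N containers: [...]" lines. A self-contained sketch of that enumeration step (a hypothetical helper, not minikube's code); it assumes a local docker CLI and the k8s_ name prefix that kubelet gives its containers, and it also reproduces the zero-match case behind the recurring kindnet warning:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name
// matches k8s_<component>, mirroring the ssh_runner invocations above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		// An empty slice corresponds to the `No container was found
		// matching "kindnet"` warning in this run.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```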
	I0717 11:13:01.930817    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:03.058847    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:03.059053    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:03.084884    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:03.084998    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:03.109373    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:03.109444    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:03.120785    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:03.120858    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:03.131465    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:03.131560    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:03.142215    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:03.142297    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:03.153036    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:03.153102    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:03.163626    8746 logs.go:276] 0 containers: []
	W0717 11:13:03.163636    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:03.163697    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:03.174377    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:03.174394    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:03.174399    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:03.189064    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:03.189074    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:03.208201    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:03.208216    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:03.219861    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:03.219871    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:03.244364    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:03.244376    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:03.279120    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:03.279132    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:03.293353    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:03.293363    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:03.318224    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:03.318235    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:03.331201    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:03.331212    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:03.345494    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:03.345504    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:03.357596    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:03.357608    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:03.395491    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:03.395505    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:03.399535    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:03.399545    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:03.410588    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:03.410600    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:03.422487    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:03.422498    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:03.440054    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:03.440067    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:05.959149    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:06.933282    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:06.933618    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:06.965089    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:06.965209    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:06.985536    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:06.985633    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:07.000603    8606 logs.go:276] 2 containers: [cf751089cf65 cb43bc7ced85]
	I0717 11:13:07.000680    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:07.012538    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:07.012610    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:07.023677    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:07.023745    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:07.034438    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:07.034512    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:07.045743    8606 logs.go:276] 0 containers: []
	W0717 11:13:07.045755    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:07.045816    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:07.056285    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:07.056302    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:07.056308    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:07.089544    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:07.089552    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:07.106418    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:07.106428    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:07.120385    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:07.120399    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:07.135053    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:07.135062    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:07.148125    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:07.148136    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:07.160081    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:07.160093    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:07.171648    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:07.171657    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:07.176553    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:07.176563    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:07.212703    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:07.212714    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:07.224362    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:07.224373    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:07.236756    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:07.236767    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:07.260026    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:07.260036    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:10.959625    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:10.959824    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:10.977446    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:10.977540    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:10.990157    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:10.990235    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:11.005411    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:11.005476    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:11.016223    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:11.016301    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:11.027539    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:11.027601    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:11.039255    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:11.039318    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:11.049171    8746 logs.go:276] 0 containers: []
	W0717 11:13:11.049183    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:11.049246    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:11.060985    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:11.061004    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:11.061010    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:11.078687    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:11.078698    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:11.091265    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:11.091275    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:11.116078    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:11.116088    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:11.151424    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:11.151434    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:11.165512    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:11.165523    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:11.181074    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:11.181084    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:11.192331    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:11.192342    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:11.204696    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:11.204712    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:11.244651    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:11.244671    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:11.249537    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:11.249546    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:11.264367    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:11.264380    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:11.289167    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:11.289178    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:11.301641    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:11.301651    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:11.320146    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:11.320157    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:11.332162    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:11.332174    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:09.787834    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:13.846242    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:14.790308    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:14.790547    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:14.814743    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:14.814853    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:14.831315    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:14.831400    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:14.844796    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:13:14.844873    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:14.856371    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:14.856438    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:14.866858    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:14.866934    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:14.877399    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:14.877474    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:14.887688    8606 logs.go:276] 0 containers: []
	W0717 11:13:14.887699    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:14.887759    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:14.900002    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:14.900020    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:14.900026    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:14.965261    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:13:14.965275    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:13:14.976429    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:14.976439    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:14.988621    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:14.988633    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:15.006176    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:15.006186    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:15.010837    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:15.010847    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:15.025079    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:15.025093    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:15.036274    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:15.036286    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:15.071648    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:13:15.071658    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:13:15.083082    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:15.083094    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:15.094950    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:15.094961    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:15.120433    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:15.120440    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:15.131716    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:15.131726    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:15.148661    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:15.148673    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:15.163437    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:15.163447    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
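	The gathering step itself runs one command per source, exactly as the ssh_runner lines show: docker logs --tail 400 for container sources, journalctl for the kubelet and docker units, and a filtered dmesg. A sketch of that dispatch, under the assumption that each command is executed verbatim inside the guest (here run locally for illustration; it requires docker and journalctl to be present, and the container ID is one taken from this log):

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log-collection command and reports how much output it
// produced, echoing the "Gathering logs for X ..." lines in this capture.
func gather(name string, argv ...string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
	if err != nil {
		fmt.Printf("  failed: %v\n", err)
		return
	}
	fmt.Printf("  captured %d bytes\n", len(out))
}

func main() {
	gather("kube-apiserver [fa0dd532dea1]",
		"docker", "logs", "--tail", "400", "fa0dd532dea1")
	gather("kubelet",
		"sudo", "journalctl", "-u", "kubelet", "-n", "400")
	gather("dmesg",
		"/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
```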
	I0717 11:13:17.677343    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:18.848958    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:18.849410    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:18.889346    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:18.889483    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:18.911798    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:18.911911    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:18.927062    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:18.927133    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:18.940092    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:18.940171    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:18.951219    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:18.951296    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:18.962082    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:18.962148    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:18.972535    8746 logs.go:276] 0 containers: []
	W0717 11:13:18.972546    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:18.972605    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:18.984012    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:18.984028    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:18.984035    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:18.995622    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:18.995635    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:19.007423    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:19.007437    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:19.024689    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:19.024701    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:19.048514    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:19.048526    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:19.087176    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:19.087196    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:19.103616    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:19.103627    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:19.119541    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:19.119552    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:19.132513    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:19.132525    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:19.156164    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:19.156174    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:19.160359    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:19.160366    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:19.190028    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:19.190039    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:19.207113    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:19.207124    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:19.221400    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:19.221411    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:19.256758    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:19.256770    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:19.275070    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:19.275081    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:21.788989    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:22.680049    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:22.680410    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:22.714478    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:22.714601    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:22.733929    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:22.734029    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:22.748404    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:13:22.748488    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:22.760336    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:22.760399    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:22.771306    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:22.771376    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:22.781545    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:22.781614    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:22.791870    8606 logs.go:276] 0 containers: []
	W0717 11:13:22.791882    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:22.791945    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:22.802373    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:22.802389    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:13:22.802395    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:13:22.814132    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:22.814141    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:22.837741    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:22.837749    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:22.872480    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:22.872494    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:22.887169    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:22.887179    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:22.902254    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:22.902270    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:22.913951    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:22.913964    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:22.927823    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:13:22.927835    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:13:22.939259    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:22.939269    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:22.950826    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:22.950836    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:22.985049    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:22.985059    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:22.996976    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:22.996985    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:23.036520    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:23.036531    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:23.048562    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:23.048574    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:23.052970    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:23.052976    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
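	Two minikube processes (PIDs 8606 and 8746, evidently two clusters exercised in parallel) write to this capture concurrently, which is why timestamps occasionally step backwards between adjacent lines, e.g. an 11:13:09 line from 8606 following an 11:13:11 line from 8746 earlier in this section. A small hypothetical filter that keeps one process's lines, keyed on the PID field of the klog prefix, makes each loop readable on its own:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const want = "8746" // keep only this process's lines
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // allow long log lines
	for sc.Scan() {
		line := sc.Text()
		// klog prefix: "I0717 11:12:47.472490    8746 logs.go:123] ..."
		// so the PID is the third whitespace-separated field.
		f := strings.Fields(line)
		if len(f) >= 3 && f[2] == want {
			fmt.Println(line)
		}
	}
}
```

	Usage would be along the lines of `go run filter.go < capture.log`, switching want to 8606 to follow the other cluster instead.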
	I0717 11:13:26.791492    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:26.791806    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:26.832441    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:26.832570    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:26.852376    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:26.852474    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:26.866888    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:26.866973    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:26.881277    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:26.881351    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:26.892557    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:26.892624    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:26.905931    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:26.906001    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:26.917069    8746 logs.go:276] 0 containers: []
	W0717 11:13:26.917078    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:26.917130    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:26.928710    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:26.928739    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:26.928744    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:26.965280    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:26.965292    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:26.983789    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:26.983801    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:26.996817    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:26.996830    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:27.012913    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:27.012932    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:27.017902    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:27.017909    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:27.061044    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:27.061055    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:27.075563    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:27.075573    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:27.086818    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:27.086832    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:27.099479    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:27.099493    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:27.110797    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:27.110807    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:27.133348    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:27.133358    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:27.145832    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:27.145843    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:27.172076    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:27.172086    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:27.186592    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:27.186603    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:25.567807    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:27.205058    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:27.205069    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:29.731664    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:30.570552    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:30.570950    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:30.607716    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:30.607840    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:30.628754    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:30.628871    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:30.648304    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:13:30.648375    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:30.660695    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:30.660768    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:30.671607    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:30.671665    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:30.682724    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:30.682788    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:30.693312    8606 logs.go:276] 0 containers: []
	W0717 11:13:30.693325    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:30.693382    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:30.703832    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:30.703851    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:30.703855    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:30.737296    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:30.737303    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:30.741497    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:13:30.741506    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:13:30.753249    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:30.753260    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:30.765422    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:30.765434    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:30.781419    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:30.781430    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:30.816242    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:30.816252    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:30.831298    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:13:30.831308    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:13:30.843662    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:30.843673    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:30.862163    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:30.862176    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:30.879910    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:30.879922    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:30.894801    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:30.894813    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:30.911537    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:30.911547    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:30.927350    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:30.927361    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:30.939977    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:30.939986    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:33.467153    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:34.734061    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:34.734323    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:34.758854    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:34.758959    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:34.774993    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:34.775071    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:34.788045    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:34.788106    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:34.798986    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:34.799045    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:34.809568    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:34.809635    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:34.820031    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:34.820103    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:34.830109    8746 logs.go:276] 0 containers: []
	W0717 11:13:34.830121    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:34.830183    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:34.840903    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:34.840922    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:34.840928    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:34.880565    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:34.880576    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:34.897535    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:34.897548    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:34.908698    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:34.908710    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:34.927155    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:34.927165    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:34.944090    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:34.944101    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:34.961619    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:34.961629    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:34.986447    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:34.986461    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:34.998065    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:34.998076    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:35.036325    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:35.036337    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:35.048274    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:35.048285    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:35.060030    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:35.060042    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:35.084425    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:35.084435    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:35.096972    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:35.096984    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:35.101662    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:35.101672    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:35.117345    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:35.117355    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:38.469539    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:38.469684    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:38.488639    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:38.488721    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:38.502419    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:38.502490    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:38.514101    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:13:38.514163    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:38.524534    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:38.524605    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:38.535267    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:38.535339    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:38.545864    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:38.545929    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:38.555879    8606 logs.go:276] 0 containers: []
	W0717 11:13:38.555889    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:38.555942    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:38.566352    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:38.566371    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:38.566376    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:38.577597    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:38.577608    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:38.602569    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:38.602579    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:38.615456    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:38.615465    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:38.629292    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:38.629304    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:38.641187    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:38.641199    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:38.655870    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:38.655879    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:38.670660    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:38.670669    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:38.705308    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:38.705317    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:38.720596    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:13:38.720605    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:13:38.731720    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:38.731733    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:38.747572    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:38.747583    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:38.765734    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:38.765743    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:38.770385    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:38.770393    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:38.804408    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:13:38.804422    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:13:37.634548    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:41.318423    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:42.635815    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:42.635992    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:42.654361    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:42.654449    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:42.668106    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:42.668183    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:42.679539    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:42.679609    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:42.690233    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:42.690306    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:42.701092    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:42.701164    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:42.711524    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:42.711596    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:42.721632    8746 logs.go:276] 0 containers: []
	W0717 11:13:42.721646    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:42.721707    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:42.731841    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:42.731858    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:42.731865    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:42.746219    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:42.746236    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:42.757883    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:42.757897    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:42.784108    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:42.784120    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:42.796321    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:42.796335    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:42.820396    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:42.820404    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:42.832286    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:42.832296    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:42.844306    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:42.844316    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:42.862698    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:42.862709    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:42.880361    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:42.880371    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:42.917204    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:42.917212    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:42.921340    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:42.921347    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:42.957665    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:42.957675    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:42.978694    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:42.978703    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:42.993358    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:42.993369    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:43.009179    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:43.009189    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:45.523143    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:46.320894    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:46.321042    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:46.334947    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:46.335022    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:46.346147    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:46.346219    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:46.356616    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:13:46.356683    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:46.367173    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:46.367248    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:46.378321    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:46.378407    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:46.389189    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:46.389259    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:46.400276    8606 logs.go:276] 0 containers: []
	W0717 11:13:46.400286    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:46.400346    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:46.410739    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:46.410760    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:46.410766    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:46.426312    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:46.426322    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:46.440802    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:46.440816    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:46.454663    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:46.454672    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:46.466819    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:46.466830    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:46.479021    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:46.479033    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:46.491212    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:46.491222    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:46.511194    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:46.511205    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:46.536791    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:46.536798    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:46.571784    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:13:46.571794    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:13:46.584284    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:13:46.584294    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:13:46.595986    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:46.595997    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:46.610657    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:46.610669    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:46.623007    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:46.623017    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:46.656352    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:46.656360    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:49.162653    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:50.524769    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:50.524940    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:50.538392    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:50.538469    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:50.553515    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:50.553585    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:50.565777    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:50.565844    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:50.576210    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:50.576282    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:50.586644    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:50.586711    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:50.597191    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:50.597256    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:50.608654    8746 logs.go:276] 0 containers: []
	W0717 11:13:50.608667    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:50.608728    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:50.620379    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:50.620396    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:50.620402    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:50.657681    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:50.657690    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:50.676592    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:50.676605    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:50.693422    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:50.693433    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:50.717417    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:50.717425    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:50.729132    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:50.729143    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:50.733638    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:50.733645    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:50.768680    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:50.768691    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:50.780571    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:50.780581    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:50.791944    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:50.791956    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:50.802862    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:50.802875    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:50.824950    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:50.824961    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:50.846494    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:50.846506    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:50.861212    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:50.861225    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:50.873136    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:50.873148    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:50.897532    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:50.897543    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:54.163076    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:54.163199    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:54.177591    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:13:54.177675    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:54.189656    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:13:54.189718    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:53.412064    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:54.200070    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:13:54.200137    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:54.210707    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:13:54.210776    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:54.221538    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:13:54.221606    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:54.231443    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:13:54.231509    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:54.241771    8606 logs.go:276] 0 containers: []
	W0717 11:13:54.241782    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:54.241839    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:54.252289    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:13:54.252306    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:13:54.252311    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:13:54.269145    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:13:54.269157    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:13:54.281037    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:13:54.281048    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:13:54.292410    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:13:54.292421    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:13:54.337125    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:54.337136    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:54.372706    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:54.372716    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:54.376841    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:13:54.376846    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:13:54.388830    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:13:54.388842    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:13:54.403570    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:13:54.403581    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:13:54.418101    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:13:54.418111    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:13:54.432416    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:54.432429    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:54.457614    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:13:54.457626    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:54.469884    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:54.469899    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:54.505581    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:13:54.505595    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:13:54.524524    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:13:54.524534    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:13:57.044092    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:58.414499    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:58.414698    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:58.432872    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:58.432968    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:58.450593    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:58.450661    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:58.462227    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:58.462304    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:58.472877    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:58.472946    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:58.483452    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:58.483520    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:58.493929    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:58.493997    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:58.504636    8746 logs.go:276] 0 containers: []
	W0717 11:13:58.504654    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:58.504715    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:58.518333    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:58.518347    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:58.518352    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:58.530850    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:58.530859    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:58.569093    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:58.569101    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:58.587689    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:58.587700    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:58.599655    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:58.599666    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:58.636369    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:58.636380    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:58.649526    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:58.649540    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:58.661701    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:58.661712    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:58.685740    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:58.685748    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:58.689736    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:58.689745    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:58.714495    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:58.714506    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:58.727333    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:58.727345    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:58.745528    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:58.745539    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:58.757010    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:58.757020    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:58.773162    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:58.773173    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:58.786849    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:58.786857    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:14:01.303118    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:02.047051    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:02.047415    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:02.082926    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:02.083062    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:02.101979    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:02.102080    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:02.116405    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:02.116488    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:02.128837    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:02.128900    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:02.149978    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:02.150048    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:02.160772    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:02.160837    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:02.172832    8606 logs.go:276] 0 containers: []
	W0717 11:14:02.172845    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:02.172905    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:02.184226    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:02.184242    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:02.184248    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:02.188655    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:02.188665    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:02.200131    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:02.200142    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:02.212208    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:02.212218    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:02.246865    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:02.246876    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:02.261209    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:02.261219    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:02.272819    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:02.272828    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:02.288254    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:02.288263    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:02.311522    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:02.311532    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:02.326000    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:02.326009    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:02.337423    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:02.337433    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:02.349700    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:02.349711    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:02.368036    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:02.368052    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:02.407803    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:02.407815    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:02.420242    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:02.420253    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:06.305617    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:06.305778    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:06.320533    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:14:06.320610    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:06.332359    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:14:06.332428    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:06.343690    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:14:06.343758    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:06.354050    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:14:06.354127    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:06.364776    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:14:06.364849    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:06.375425    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:14:06.375486    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:06.385401    8746 logs.go:276] 0 containers: []
	W0717 11:14:06.385416    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:06.385470    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:06.396070    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:14:06.396092    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:14:06.396098    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:14:06.409834    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:14:06.409845    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:14:06.427399    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:14:06.427409    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:14:06.439924    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:06.439935    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:06.475235    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:14:06.475248    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:14:06.489839    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:14:06.489852    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:14:06.508133    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:14:06.508144    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:14:06.519873    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:14:06.519887    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:14:06.531552    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:14:06.531571    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:06.546705    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:14:06.546718    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:14:06.571799    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:14:06.571810    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:14:06.582793    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:06.582804    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:06.604494    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:06.604504    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:06.642144    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:06.642152    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:06.646116    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:14:06.646123    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:14:06.660686    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:14:06.660696    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:14:04.933758    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:09.174426    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:09.936483    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:09.936758    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:09.965447    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:09.965581    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:09.983924    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:09.984019    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:09.998287    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:09.998362    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:10.014554    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:10.014625    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:10.025590    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:10.025658    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:10.039836    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:10.039905    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:10.050741    8606 logs.go:276] 0 containers: []
	W0717 11:14:10.050754    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:10.050805    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:10.061985    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:10.062004    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:10.062015    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:10.067080    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:10.067090    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:10.081145    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:10.081157    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:10.097517    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:10.097527    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:10.121542    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:10.121554    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:10.137021    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:10.137036    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:10.152414    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:10.152427    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:10.166736    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:10.166749    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:10.201135    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:10.201149    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:10.216701    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:10.216711    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:10.228014    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:10.228028    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:10.242584    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:10.242598    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:10.264126    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:10.264136    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:10.275928    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:10.275938    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:10.287602    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:10.287612    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:12.829979    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:14.174879    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:14.175088    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:14.192914    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:14:14.192997    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:14.213874    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:14:14.213945    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:14.224177    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:14:14.224248    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:14.235973    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:14:14.236045    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:14.246702    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:14:14.246769    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:14.257315    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:14:14.257390    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:14.267891    8746 logs.go:276] 0 containers: []
	W0717 11:14:14.267904    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:14.267962    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:14.278883    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:14:14.278902    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:14.278908    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:14.318270    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:14.318280    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:14.356414    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:14:14.356426    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:14:14.381552    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:14:14.381563    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:14:14.405156    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:14:14.405166    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:14:14.423309    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:14:14.423319    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:14:14.452466    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:14.452486    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:14.459527    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:14:14.459541    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:14:14.474886    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:14:14.474903    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:14:14.493559    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:14:14.493569    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:14:14.505290    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:14:14.505303    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:14:14.516821    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:14:14.516833    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:14:14.529700    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:14.529715    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:14.552686    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:14:14.552694    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:14:14.566739    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:14:14.566751    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:14:14.578795    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:14:14.578807    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:17.093366    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:17.832804    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:17.833093    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:17.863101    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:17.863234    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:17.882573    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:17.882652    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:17.896991    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:17.897059    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:17.908651    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:17.908714    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:17.919428    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:17.919485    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:17.929736    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:17.929801    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:17.940420    8606 logs.go:276] 0 containers: []
	W0717 11:14:17.940431    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:17.940487    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:17.950876    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:17.950898    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:17.950904    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:17.963063    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:17.963077    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:17.974916    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:17.974927    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:17.999228    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:17.999236    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:18.034117    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:18.034128    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:18.046532    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:18.046543    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:18.083095    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:18.083106    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:18.098783    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:18.098794    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:18.112637    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:18.112647    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:18.127689    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:18.127698    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:18.132384    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:18.132390    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:18.145450    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:18.145459    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:18.157273    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:18.157283    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:18.168919    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:18.168933    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:18.186624    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:18.186637    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
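
The cycle above is minikube's standard log-gathering pass: for each control-plane component it lists containers matching the k8s_<component> name filter, then tails the last 400 lines of each match. Below is a minimal Go sketch of that list-then-tail pattern; the helper names are hypothetical and it runs locally, whereas the real code issues these commands over SSH via ssh_runner.go and parses them in logs.go.

// Sketch of the list-then-tail pattern visible in the log above.
// Hypothetical approximation, not minikube's actual source.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose name matches the
// kubeadm naming convention k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, mirroring `docker logs --tail 400 <id>`.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}
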
	I0717 11:14:22.095743    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:22.096035    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:22.122641    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:14:22.122792    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:22.139555    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:14:22.139629    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:22.153269    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:14:22.153342    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:22.164972    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:14:22.165045    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:22.175378    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:14:22.175447    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:22.186110    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:14:22.186180    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:20.700428    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:22.196547    8746 logs.go:276] 0 containers: []
	W0717 11:14:22.196675    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:22.196734    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:22.215865    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:14:22.215882    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:14:22.215887    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:14:22.235300    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:14:22.235316    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:14:22.256695    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:14:22.256705    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:14:22.272056    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:22.272068    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:22.295851    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:14:22.295860    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:22.309917    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:22.309931    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:22.314003    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:14:22.314009    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:14:22.327892    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:22.327907    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:22.363227    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:14:22.363242    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:14:22.377435    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:14:22.377445    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:14:22.390704    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:14:22.390718    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:14:22.408916    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:22.408929    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:22.448991    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:14:22.449003    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:14:22.474562    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:14:22.474577    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:14:22.487337    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:14:22.487347    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:14:22.501870    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:14:22.501880    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:14:25.015545    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:25.702848    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:25.703076    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:25.728061    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:25.728167    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:25.746337    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:25.746417    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:25.760520    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:25.760589    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:25.776630    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:25.776690    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:25.786789    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:25.786845    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:25.797369    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:25.797439    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:25.807462    8606 logs.go:276] 0 containers: []
	W0717 11:14:25.807478    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:25.807540    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:25.817975    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:25.817992    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:25.817997    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:25.829126    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:25.829136    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:25.864146    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:25.864156    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:25.875481    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:25.875495    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:25.887370    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:25.887384    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:25.898840    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:25.898852    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:25.913576    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:25.913584    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:25.918234    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:25.918250    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:25.932594    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:25.932606    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:25.946851    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:25.946864    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:25.960659    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:25.960672    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:25.973418    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:25.973428    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:26.008100    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:26.008109    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:26.019881    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:26.019891    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:26.038518    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:26.038529    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:28.564642    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:30.017411    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:30.017548    8746 kubeadm.go:597] duration metric: took 4m3.473516958s to restartPrimaryControlPlane
	W0717 11:14:30.017674    8746 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 11:14:30.017738    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0717 11:14:31.078532    8746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.06077625s)
	I0717 11:14:31.078600    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 11:14:31.083445    8746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:14:31.086015    8746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:14:31.088674    8746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 11:14:31.088681    8746 kubeadm.go:157] found existing configuration files:
	
	I0717 11:14:31.088707    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/admin.conf
	I0717 11:14:31.091370    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 11:14:31.091395    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:14:31.093837    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/kubelet.conf
	I0717 11:14:31.096398    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 11:14:31.096418    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:14:31.099400    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/controller-manager.conf
	I0717 11:14:31.101927    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 11:14:31.101948    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:14:31.104553    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/scheduler.conf
	I0717 11:14:31.107644    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 11:14:31.107665    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
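
The grep/rm sequence above is the stale-kubeconfig check: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here every grep exits with status 2 because `kubeadm reset` already deleted the files). A rough Go sketch of that logic, assuming the endpoint and file list shown above; it is an approximation of minikube's kubeadm.go, not the actual code.

// Sketch of the stale-config cleanup above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51504"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits 1 when the pattern is absent and 2 when the file is
		// missing; either way the config cannot be reused for this cluster.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}
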
	I0717 11:14:31.110617    8746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 11:14:31.128645    8746 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0717 11:14:31.128672    8746 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 11:14:31.178772    8746 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 11:14:31.178825    8746 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 11:14:31.178901    8746 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 11:14:31.231927    8746 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 11:14:31.236096    8746 out.go:204]   - Generating certificates and keys ...
	I0717 11:14:31.236132    8746 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 11:14:31.236177    8746 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 11:14:31.236407    8746 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 11:14:31.236533    8746 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 11:14:31.236605    8746 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 11:14:31.236669    8746 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 11:14:31.236722    8746 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 11:14:31.236761    8746 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 11:14:31.236826    8746 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 11:14:31.236907    8746 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 11:14:31.236941    8746 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 11:14:31.236989    8746 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 11:14:31.263532    8746 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 11:14:31.325135    8746 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 11:14:31.453902    8746 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 11:14:31.548883    8746 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 11:14:31.587813    8746 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 11:14:31.588133    8746 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 11:14:31.588186    8746 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 11:14:31.674021    8746 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 11:14:31.681932    8746 out.go:204]   - Booting up control plane ...
	I0717 11:14:31.682044    8746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 11:14:31.682105    8746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 11:14:31.682152    8746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 11:14:31.682207    8746 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 11:14:31.682305    8746 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 11:14:33.567053    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:33.567159    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:33.578517    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:33.578588    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:33.601288    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:33.601365    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:33.613013    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:33.613094    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:33.624824    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:33.624894    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:33.636043    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:33.636115    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:33.647494    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:33.647564    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:33.658646    8606 logs.go:276] 0 containers: []
	W0717 11:14:33.658657    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:33.658717    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:33.670553    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:33.670572    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:33.670577    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:33.683694    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:33.683707    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:33.696220    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:33.696231    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:33.708647    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:33.708660    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:33.724695    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:33.724706    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:33.735945    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:33.735955    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:33.773929    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:33.773945    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:33.788798    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:33.788812    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:33.807779    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:33.807795    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:33.812345    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:33.812354    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:33.825393    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:33.825410    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:33.842761    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:33.842773    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:33.867946    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:33.867959    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:33.903956    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:33.903978    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:33.917008    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:33.917026    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:36.185013    8746 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504856 seconds
	I0717 11:14:36.185158    8746 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 11:14:36.190900    8746 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 11:14:36.704398    8746 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 11:14:36.704525    8746 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-058000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 11:14:37.209766    8746 kubeadm.go:310] [bootstrap-token] Using token: sfup86.4bhq6tagj8ecwh82
	I0717 11:14:37.214652    8746 out.go:204]   - Configuring RBAC rules ...
	I0717 11:14:37.214725    8746 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 11:14:37.214784    8746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 11:14:37.216564    8746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 11:14:37.221234    8746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 11:14:37.222350    8746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 11:14:37.223519    8746 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 11:14:37.227020    8746 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 11:14:37.388927    8746 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 11:14:37.615566    8746 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 11:14:37.616280    8746 kubeadm.go:310] 
	I0717 11:14:37.616314    8746 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 11:14:37.616318    8746 kubeadm.go:310] 
	I0717 11:14:37.616356    8746 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 11:14:37.616360    8746 kubeadm.go:310] 
	I0717 11:14:37.616373    8746 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 11:14:37.616407    8746 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 11:14:37.616453    8746 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 11:14:37.616459    8746 kubeadm.go:310] 
	I0717 11:14:37.616499    8746 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 11:14:37.616502    8746 kubeadm.go:310] 
	I0717 11:14:37.616562    8746 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 11:14:37.616566    8746 kubeadm.go:310] 
	I0717 11:14:37.616630    8746 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 11:14:37.616686    8746 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 11:14:37.616728    8746 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 11:14:37.616732    8746 kubeadm.go:310] 
	I0717 11:14:37.616785    8746 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 11:14:37.616857    8746 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 11:14:37.616863    8746 kubeadm.go:310] 
	I0717 11:14:37.616946    8746 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sfup86.4bhq6tagj8ecwh82 \
	I0717 11:14:37.617014    8746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c24be85cc8a3b21770f1d422f860354652361b15e4e8167266dbe73d5c2037be \
	I0717 11:14:37.617027    8746 kubeadm.go:310] 	--control-plane 
	I0717 11:14:37.617030    8746 kubeadm.go:310] 
	I0717 11:14:37.617073    8746 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 11:14:37.617076    8746 kubeadm.go:310] 
	I0717 11:14:37.617113    8746 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sfup86.4bhq6tagj8ecwh82 \
	I0717 11:14:37.617238    8746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c24be85cc8a3b21770f1d422f860354652361b15e4e8167266dbe73d5c2037be 
	I0717 11:14:37.617292    8746 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 11:14:37.617300    8746 cni.go:84] Creating CNI manager for ""
	I0717 11:14:37.617310    8746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:14:37.625419    8746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 11:14:37.629473    8746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 11:14:37.632878    8746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
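
The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is a CNI network list for the bridge plugin. The log does not reproduce its bytes; the sketch below emits a representative conflist with assumed field values (the 10.244.0.0/16 pod subnet is consistent with the 10.244.0.x pod IPs in the coredns logs further down), so treat it as illustrative, not the exact file.

// Emit a representative bridge-CNI conflist (assumed contents).
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"addIf":            "true",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // matches the pod IPs seen in the coredns logs
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
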
	I0717 11:14:37.638144    8746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 11:14:37.638194    8746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 11:14:37.638207    8746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-058000 minikube.k8s.io/updated_at=2024_07_17T11_14_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=stopped-upgrade-058000 minikube.k8s.io/primary=true
	I0717 11:14:37.641387    8746 ops.go:34] apiserver oom_adj: -16
	I0717 11:14:37.683675    8746 kubeadm.go:1113] duration metric: took 45.52175ms to wait for elevateKubeSystemPrivileges
	I0717 11:14:37.683752    8746 kubeadm.go:394] duration metric: took 4m11.152575458s to StartCluster
	I0717 11:14:37.683766    8746 settings.go:142] acquiring lock: {Name:mkb2460e5e181fb6243e4d9c07c303cabf02ebce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:14:37.683855    8746 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:14:37.684262    8746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/kubeconfig: {Name:mk593058234481727c8f9c6b6ce8d5b26e4d4302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:14:37.684479    8746 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:14:37.684565    8746 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:14:37.684541    8746 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 11:14:37.684579    8746 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-058000"
	I0717 11:14:37.684591    8746 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-058000"
	W0717 11:14:37.684594    8746 addons.go:243] addon storage-provisioner should already be in state true
	I0717 11:14:37.684598    8746 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-058000"
	I0717 11:14:37.684605    8746 host.go:66] Checking if "stopped-upgrade-058000" exists ...
	I0717 11:14:37.684610    8746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-058000"
	I0717 11:14:37.685774    8746 kapi.go:59] client config for stopped-upgrade-058000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/client.key", CAFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106267730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:14:37.685908    8746 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-058000"
	W0717 11:14:37.685913    8746 addons.go:243] addon default-storageclass should already be in state true
	I0717 11:14:37.685920    8746 host.go:66] Checking if "stopped-upgrade-058000" exists ...
	I0717 11:14:37.688531    8746 out.go:177] * Verifying Kubernetes components...
	I0717 11:14:37.688834    8746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 11:14:37.692606    8746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 11:14:37.692615    8746 sshutil.go:53] new ssh client: &{IP:localhost Port:51470 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/id_rsa Username:docker}
	I0717 11:14:37.696395    8746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:14:36.449908    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:37.700408    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:14:37.704499    8746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:14:37.704504    8746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 11:14:37.704510    8746 sshutil.go:53] new ssh client: &{IP:localhost Port:51470 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/id_rsa Username:docker}
	I0717 11:14:37.793394    8746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:14:37.798729    8746 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:14:37.798771    8746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:14:37.802545    8746 api_server.go:72] duration metric: took 118.055375ms to wait for apiserver process to appear ...
	I0717 11:14:37.802552    8746 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:14:37.802558    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:37.827033    8746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 11:14:37.841093    8746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:14:41.452374    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:41.452813    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:41.484783    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:41.484910    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:41.504670    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:41.504753    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:41.519310    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:41.519387    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:41.531547    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:41.531617    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:41.542354    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:41.542417    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:41.553536    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:41.553603    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:41.564185    8606 logs.go:276] 0 containers: []
	W0717 11:14:41.564197    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:41.564253    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:41.574490    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:41.574507    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:41.574513    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:41.593982    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:41.593995    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:41.612551    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:41.612563    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:41.626802    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:41.626812    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:41.640380    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:41.640390    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:41.652813    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:41.652826    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:41.665180    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:41.665191    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:41.701024    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:41.701039    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:41.712619    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:41.712630    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:41.724890    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:41.724901    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:41.730097    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:41.730107    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:41.742672    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:41.742682    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:41.755570    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:41.755581    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:41.778221    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:41.778228    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:41.811281    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:41.811291    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:42.804782    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:42.804871    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:44.327934    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:47.805733    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:47.805797    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:49.330540    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:49.330644    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:49.342103    8606 logs.go:276] 1 containers: [fa0dd532dea1]
	I0717 11:14:49.342175    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:49.353734    8606 logs.go:276] 1 containers: [23d608d9ea41]
	I0717 11:14:49.353805    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:49.364347    8606 logs.go:276] 4 containers: [09538c145b0c cf1551078f06 cf751089cf65 cb43bc7ced85]
	I0717 11:14:49.364420    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:49.375147    8606 logs.go:276] 1 containers: [1f063d50415d]
	I0717 11:14:49.375221    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:49.385859    8606 logs.go:276] 1 containers: [cd6505a7f35a]
	I0717 11:14:49.385927    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:49.396316    8606 logs.go:276] 1 containers: [55a8799c901d]
	I0717 11:14:49.396386    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:49.406338    8606 logs.go:276] 0 containers: []
	W0717 11:14:49.406353    8606 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:49.406412    8606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:49.421532    8606 logs.go:276] 1 containers: [926cd5eb774c]
	I0717 11:14:49.421549    8606 logs.go:123] Gathering logs for coredns [cf1551078f06] ...
	I0717 11:14:49.421554    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1551078f06"
	I0717 11:14:49.433239    8606 logs.go:123] Gathering logs for storage-provisioner [926cd5eb774c] ...
	I0717 11:14:49.433250    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926cd5eb774c"
	I0717 11:14:49.446530    8606 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:49.446541    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:49.451534    8606 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:49.451541    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:49.488441    8606 logs.go:123] Gathering logs for kube-proxy [cd6505a7f35a] ...
	I0717 11:14:49.488451    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6505a7f35a"
	I0717 11:14:49.508985    8606 logs.go:123] Gathering logs for kube-controller-manager [55a8799c901d] ...
	I0717 11:14:49.508999    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55a8799c901d"
	I0717 11:14:49.534536    8606 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:49.534549    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:49.570634    8606 logs.go:123] Gathering logs for etcd [23d608d9ea41] ...
	I0717 11:14:49.570646    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23d608d9ea41"
	I0717 11:14:49.584351    8606 logs.go:123] Gathering logs for coredns [09538c145b0c] ...
	I0717 11:14:49.584363    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09538c145b0c"
	I0717 11:14:49.596040    8606 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:49.596050    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:49.619222    8606 logs.go:123] Gathering logs for container status ...
	I0717 11:14:49.619234    8606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:49.630851    8606 logs.go:123] Gathering logs for kube-apiserver [fa0dd532dea1] ...
	I0717 11:14:49.630862    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa0dd532dea1"
	I0717 11:14:49.645381    8606 logs.go:123] Gathering logs for coredns [cf751089cf65] ...
	I0717 11:14:49.645392    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf751089cf65"
	I0717 11:14:49.657617    8606 logs.go:123] Gathering logs for coredns [cb43bc7ced85] ...
	I0717 11:14:49.657628    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb43bc7ced85"
	I0717 11:14:49.671200    8606 logs.go:123] Gathering logs for kube-scheduler [1f063d50415d] ...
	I0717 11:14:49.671211    8606 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f063d50415d"
	I0717 11:14:52.188706    8606 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:57.191007    8606 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:57.195555    8606 out.go:177] 
	I0717 11:14:52.806332    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:52.806363    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0717 11:14:57.199529    8606 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0717 11:14:57.199537    8606 out.go:239] * 
	W0717 11:14:57.200269    8606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:14:57.211366    8606 out.go:177] 
	I0717 11:14:57.807059    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:57.807082    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:15:02.807909    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:15:02.807935    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:15:07.808501    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:15:07.808540    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0717 11:15:08.165063    8746 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0717 11:15:08.168302    8746 out.go:177] * Enabled addons: storage-provisioner
	I0717 11:15:08.180318    8746 addons.go:510] duration metric: took 30.495729417s for enable addons: enabled=[storage-provisioner]
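
Both processes (8606 and 8746) spend their final minutes in the same loop visible throughout this log: GET https://10.0.2.15:8443/healthz with a short client timeout, log the "stopped: ... context deadline exceeded" failure, and retry until the overall node deadline ("Will wait 6m0s for node") expires. A minimal Go sketch of such a poll loop, assuming the timeouts shown in the log; it is an approximation, not minikube's api_server.go.

// Sketch of the healthz polling seen above: short per-request timeout,
// retries until an overall deadline.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5 s cadence of the retries above
		Transport: &http.Transport{
			// The apiserver's cert is not trusted by the host running the
			// probe, so verification is skipped for the health check only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(6 * time.Minute) // "Will wait 6m0s for node"
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded while awaiting headers
			time.Sleep(time.Second)      // avoid spinning on fast failures
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthz reported healthy")
			return
		}
	}
	fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
}
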
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-07-17 18:06:06 UTC, ends at Wed 2024-07-17 18:15:13 UTC. --
	Jul 17 18:14:57 running-upgrade-891000 dockerd[3194]: time="2024-07-17T18:14:57.365125618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 18:14:57 running-upgrade-891000 dockerd[3194]: time="2024-07-17T18:14:57.365218197Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8254ff0bc8d63f2cab24b075631a7eb2a21a62a95abd7506dde6df97f5a885e4 pid=18588 runtime=io.containerd.runc.v2
	Jul 17 18:14:57 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:14:57Z" level=error msg="ContainerStats resp: {0x40008be440 linux}"
	Jul 17 18:14:57 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:14:57Z" level=error msg="ContainerStats resp: {0x400099f540 linux}"
	Jul 17 18:14:58 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:14:58Z" level=error msg="ContainerStats resp: {0x40005f95c0 linux}"
	Jul 17 18:14:59 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:14:59Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 17 18:14:59 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:14:59Z" level=error msg="ContainerStats resp: {0x400086c640 linux}"
	Jul 17 18:14:59 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:14:59Z" level=error msg="ContainerStats resp: {0x400086ca80 linux}"
	Jul 17 18:14:59 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:14:59Z" level=error msg="ContainerStats resp: {0x400086cdc0 linux}"
	Jul 17 18:14:59 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:14:59Z" level=error msg="ContainerStats resp: {0x40008d8e00 linux}"
	Jul 17 18:14:59 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:14:59Z" level=error msg="ContainerStats resp: {0x40008d9280 linux}"
	Jul 17 18:14:59 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:14:59Z" level=error msg="ContainerStats resp: {0x40008d96c0 linux}"
	Jul 17 18:14:59 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:14:59Z" level=error msg="ContainerStats resp: {0x400083c380 linux}"
	Jul 17 18:15:04 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:15:04Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 17 18:15:09 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:15:09Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 17 18:15:09 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:15:09Z" level=error msg="ContainerStats resp: {0x400083d880 linux}"
	Jul 17 18:15:09 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:15:09Z" level=error msg="ContainerStats resp: {0x4000548b80 linux}"
	Jul 17 18:15:10 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:15:10Z" level=error msg="ContainerStats resp: {0x400086c840 linux}"
	Jul 17 18:15:11 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:15:11Z" level=error msg="ContainerStats resp: {0x40008d8700 linux}"
	Jul 17 18:15:11 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:15:11Z" level=error msg="ContainerStats resp: {0x40008d9600 linux}"
	Jul 17 18:15:11 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:15:11Z" level=error msg="ContainerStats resp: {0x40008d9a00 linux}"
	Jul 17 18:15:11 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:15:11Z" level=error msg="ContainerStats resp: {0x400086d100 linux}"
	Jul 17 18:15:11 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:15:11Z" level=error msg="ContainerStats resp: {0x400086d5c0 linux}"
	Jul 17 18:15:11 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:15:11Z" level=error msg="ContainerStats resp: {0x4000754780 linux}"
	Jul 17 18:15:11 running-upgrade-891000 cri-dockerd[3039]: time="2024-07-17T18:15:11Z" level=error msg="ContainerStats resp: {0x4000358040 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8254ff0bc8d63       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   1c5e2c1e52aab
	feb7c28e89d16       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   67eab81487498
	09538c145b0cb       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   67eab81487498
	cf1551078f064       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   1c5e2c1e52aab
	926cd5eb774c3       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   7d6eeb52db372
	cd6505a7f35a5       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   4429a16d612c2
	1f063d50415d1       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   bcb088b65394e
	23d608d9ea413       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   cab1ef221211a
	fa0dd532dea1e       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   d8a587c1cce36
	55a8799c901d3       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   c381f270bf3a7
	
	
	==> coredns [09538c145b0c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1999931956592896982.6682923152376118773. HINFO: read udp 10.244.0.3:57734->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1999931956592896982.6682923152376118773. HINFO: read udp 10.244.0.3:55205->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1999931956592896982.6682923152376118773. HINFO: read udp 10.244.0.3:33748->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1999931956592896982.6682923152376118773. HINFO: read udp 10.244.0.3:33065->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1999931956592896982.6682923152376118773. HINFO: read udp 10.244.0.3:35038->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1999931956592896982.6682923152376118773. HINFO: read udp 10.244.0.3:48890->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1999931956592896982.6682923152376118773. HINFO: read udp 10.244.0.3:45938->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1999931956592896982.6682923152376118773. HINFO: read udp 10.244.0.3:44245->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1999931956592896982.6682923152376118773. HINFO: read udp 10.244.0.3:46004->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1999931956592896982.6682923152376118773. HINFO: read udp 10.244.0.3:35666->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8254ff0bc8d6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6902831678496167143.8523278574545323775. HINFO: read udp 10.244.0.2:55826->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6902831678496167143.8523278574545323775. HINFO: read udp 10.244.0.2:35717->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6902831678496167143.8523278574545323775. HINFO: read udp 10.244.0.2:41781->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6902831678496167143.8523278574545323775. HINFO: read udp 10.244.0.2:57499->10.0.2.3:53: i/o timeout
	
	
	==> coredns [cf1551078f06] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1285484522640319695.8877087107256010120. HINFO: read udp 10.244.0.2:37745->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1285484522640319695.8877087107256010120. HINFO: read udp 10.244.0.2:44708->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1285484522640319695.8877087107256010120. HINFO: read udp 10.244.0.2:48935->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1285484522640319695.8877087107256010120. HINFO: read udp 10.244.0.2:38664->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1285484522640319695.8877087107256010120. HINFO: read udp 10.244.0.2:33353->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1285484522640319695.8877087107256010120. HINFO: read udp 10.244.0.2:51327->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1285484522640319695.8877087107256010120. HINFO: read udp 10.244.0.2:39371->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1285484522640319695.8877087107256010120. HINFO: read udp 10.244.0.2:55586->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1285484522640319695.8877087107256010120. HINFO: read udp 10.244.0.2:45982->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1285484522640319695.8877087107256010120. HINFO: read udp 10.244.0.2:54407->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [feb7c28e89d1] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2170819941050770732.7263749013472018222. HINFO: read udp 10.244.0.3:48801->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2170819941050770732.7263749013472018222. HINFO: read udp 10.244.0.3:47038->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2170819941050770732.7263749013472018222. HINFO: read udp 10.244.0.3:34242->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2170819941050770732.7263749013472018222. HINFO: read udp 10.244.0.3:40574->10.0.2.3:53: i/o timeout
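	
	All four coredns instances above fail the same startup self-check: an HINFO query to the upstream resolver 10.0.2.3:53 (the address QEMU's user-mode networking injects) times out, so the pods have no working upstream DNS. A minimal Go sketch of an equivalent probe, purely hypothetical and assuming it is run from inside the guest where 10.0.2.3 is reachable:
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Force every lookup through 10.0.2.3:53, the upstream the coredns
		// pods above time out against, mirroring their failed HINFO check.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.io")
		if err != nil {
			// With the guest in the state logged above, this prints an
			// i/o timeout just like the plugin/errors lines.
			fmt.Println("upstream DNS probe failed:", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}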
	
	
	==> describe nodes <==
	Name:               running-upgrade-891000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-891000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=running-upgrade-891000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T11_10_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:10:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-891000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:15:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:10:56 +0000   Wed, 17 Jul 2024 18:10:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:10:56 +0000   Wed, 17 Jul 2024 18:10:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:10:56 +0000   Wed, 17 Jul 2024 18:10:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:10:56 +0000   Wed, 17 Jul 2024 18:10:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-891000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a318c9339564df0bf3371cb08ee7bf6
	  System UUID:                3a318c9339564df0bf3371cb08ee7bf6
	  Boot ID:                    0fa35bd0-d639-4b4b-a6d9-7090b03a85f6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-f9g7j                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-qt986                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-891000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-891000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-891000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-mpn52                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-891000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-891000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-891000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-891000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-891000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-891000 event: Registered Node running-upgrade-891000 in Controller
	
	
	==> dmesg <==
	[  +1.851206] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.077231] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.080126] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.143434] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.086953] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.081532] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.544108] systemd-fstab-generator[1287]: Ignoring "noauto" for root device
	[  +9.649328] systemd-fstab-generator[1913]: Ignoring "noauto" for root device
	[  +2.741340] systemd-fstab-generator[2193]: Ignoring "noauto" for root device
	[  +0.143139] systemd-fstab-generator[2226]: Ignoring "noauto" for root device
	[  +0.091469] systemd-fstab-generator[2237]: Ignoring "noauto" for root device
	[  +0.100093] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[  +2.389528] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.211114] systemd-fstab-generator[2996]: Ignoring "noauto" for root device
	[  +0.086932] systemd-fstab-generator[3007]: Ignoring "noauto" for root device
	[  +0.085040] systemd-fstab-generator[3018]: Ignoring "noauto" for root device
	[  +0.089236] systemd-fstab-generator[3032]: Ignoring "noauto" for root device
	[  +2.226905] systemd-fstab-generator[3181]: Ignoring "noauto" for root device
	[  +2.902744] systemd-fstab-generator[3659]: Ignoring "noauto" for root device
	[  +2.116280] systemd-fstab-generator[3938]: Ignoring "noauto" for root device
	[Jul17 18:07] kauditd_printk_skb: 68 callbacks suppressed
	[Jul17 18:10] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.354937] systemd-fstab-generator[11743]: Ignoring "noauto" for root device
	[  +5.643309] systemd-fstab-generator[12328]: Ignoring "noauto" for root device
	[  +0.475731] systemd-fstab-generator[12462]: Ignoring "noauto" for root device
	
	
	==> etcd [23d608d9ea41] <==
	{"level":"info","ts":"2024-07-17T18:10:51.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-17T18:10:51.770Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-17T18:10:51.778Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T18:10:51.778Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-17T18:10:51.778Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-17T18:10:51.778Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T18:10:51.778Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T18:10:52.261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T18:10:52.261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T18:10:52.261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-17T18:10:52.261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:10:52.261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-17T18:10:52.261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T18:10:52.261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-17T18:10:52.261Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:10:52.261Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:10:52.262Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:10:52.262Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:10:52.262Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-891000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:10:52.262Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:10:52.262Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:10:52.262Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:10:52.262Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:10:52.262Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-17T18:10:52.263Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:15:13 up 9 min,  0 users,  load average: 0.24, 0.33, 0.18
	Linux running-upgrade-891000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fa0dd532dea1] <==
	I0717 18:10:53.553752       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0717 18:10:53.553768       1 cache.go:39] Caches are synced for autoregister controller
	I0717 18:10:53.554629       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0717 18:10:53.554832       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0717 18:10:53.556114       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 18:10:53.556210       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 18:10:53.612507       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0717 18:10:54.284819       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 18:10:54.464381       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 18:10:54.468151       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 18:10:54.468197       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 18:10:54.632639       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 18:10:54.642349       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 18:10:54.731135       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0717 18:10:54.733246       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0717 18:10:54.733612       1 controller.go:611] quota admission added evaluator for: endpoints
	I0717 18:10:54.734866       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 18:10:55.624811       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0717 18:10:55.980878       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0717 18:10:55.984180       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0717 18:10:56.011467       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0717 18:10:56.033608       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 18:11:09.546776       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0717 18:11:09.646429       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0717 18:11:10.762604       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [55a8799c901d] <==
	I0717 18:11:08.745484       1 shared_informer.go:262] Caches are synced for cronjob
	I0717 18:11:08.745560       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0717 18:11:08.745637       1 shared_informer.go:262] Caches are synced for deployment
	I0717 18:11:08.745862       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0717 18:11:08.746298       1 shared_informer.go:262] Caches are synced for crt configmap
	I0717 18:11:08.749455       1 shared_informer.go:262] Caches are synced for namespace
	I0717 18:11:08.830539       1 shared_informer.go:262] Caches are synced for disruption
	I0717 18:11:08.830556       1 shared_informer.go:262] Caches are synced for daemon sets
	I0717 18:11:08.830557       1 disruption.go:371] Sending events to api server.
	I0717 18:11:08.901061       1 shared_informer.go:262] Caches are synced for resource quota
	I0717 18:11:08.907601       1 shared_informer.go:262] Caches are synced for ephemeral
	I0717 18:11:08.934765       1 shared_informer.go:262] Caches are synced for persistent volume
	I0717 18:11:08.945274       1 shared_informer.go:262] Caches are synced for PVC protection
	I0717 18:11:08.945408       1 shared_informer.go:262] Caches are synced for PV protection
	I0717 18:11:08.948609       1 shared_informer.go:262] Caches are synced for resource quota
	I0717 18:11:08.978623       1 shared_informer.go:262] Caches are synced for attach detach
	I0717 18:11:08.994929       1 shared_informer.go:262] Caches are synced for expand
	I0717 18:11:08.995979       1 shared_informer.go:262] Caches are synced for stateful set
	I0717 18:11:09.359146       1 shared_informer.go:262] Caches are synced for garbage collector
	I0717 18:11:09.389516       1 shared_informer.go:262] Caches are synced for garbage collector
	I0717 18:11:09.389526       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0717 18:11:09.548167       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0717 18:11:09.649413       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mpn52"
	I0717 18:11:09.748850       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-qt986"
	I0717 18:11:09.752611       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-f9g7j"
	
	
	==> kube-proxy [cd6505a7f35a] <==
	I0717 18:11:10.751860       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0717 18:11:10.751885       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0717 18:11:10.751894       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0717 18:11:10.760959       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0717 18:11:10.760970       1 server_others.go:206] "Using iptables Proxier"
	I0717 18:11:10.760981       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0717 18:11:10.761080       1 server.go:661] "Version info" version="v1.24.1"
	I0717 18:11:10.761084       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:11:10.761316       1 config.go:317] "Starting service config controller"
	I0717 18:11:10.761322       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0717 18:11:10.761333       1 config.go:226] "Starting endpoint slice config controller"
	I0717 18:11:10.761334       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0717 18:11:10.761656       1 config.go:444] "Starting node config controller"
	I0717 18:11:10.761658       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0717 18:11:10.861836       1 shared_informer.go:262] Caches are synced for node config
	I0717 18:11:10.861856       1 shared_informer.go:262] Caches are synced for service config
	I0717 18:11:10.861869       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1f063d50415d] <==
	W0717 18:10:53.521939       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:10:53.521948       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:10:53.522022       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:10:53.522031       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:10:53.522105       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:10:53.522112       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:10:53.522288       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:10:53.522295       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 18:10:53.522322       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:10:53.522331       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 18:10:53.522348       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:10:53.522356       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:10:53.522375       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:10:53.522379       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 18:10:53.522392       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:10:53.522397       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:10:53.522433       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:10:53.522449       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:10:54.351401       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:10:54.351515       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:10:54.380231       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:10:54.380282       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:10:54.488489       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:10:54.488678       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 18:10:57.619225       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-07-17 18:06:06 UTC, ends at Wed 2024-07-17 18:15:13 UTC. --
	Jul 17 18:10:58 running-upgrade-891000 kubelet[12334]: E0717 18:10:58.208404   12334 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-891000\" already exists" pod="kube-system/etcd-running-upgrade-891000"
	Jul 17 18:11:08 running-upgrade-891000 kubelet[12334]: I0717 18:11:08.702137   12334 topology_manager.go:200] "Topology Admit Handler"
	Jul 17 18:11:08 running-upgrade-891000 kubelet[12334]: I0717 18:11:08.742945   12334 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 18:11:08 running-upgrade-891000 kubelet[12334]: I0717 18:11:08.743429   12334 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 18:11:08 running-upgrade-891000 kubelet[12334]: I0717 18:11:08.843993   12334 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cdgx\" (UniqueName: \"kubernetes.io/projected/fcc79c1a-d93d-44f3-9e97-04e2282b2324-kube-api-access-8cdgx\") pod \"storage-provisioner\" (UID: \"fcc79c1a-d93d-44f3-9e97-04e2282b2324\") " pod="kube-system/storage-provisioner"
	Jul 17 18:11:08 running-upgrade-891000 kubelet[12334]: I0717 18:11:08.844045   12334 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fcc79c1a-d93d-44f3-9e97-04e2282b2324-tmp\") pod \"storage-provisioner\" (UID: \"fcc79c1a-d93d-44f3-9e97-04e2282b2324\") " pod="kube-system/storage-provisioner"
	Jul 17 18:11:08 running-upgrade-891000 kubelet[12334]: E0717 18:11:08.949258   12334 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 17 18:11:08 running-upgrade-891000 kubelet[12334]: E0717 18:11:08.949284   12334 projected.go:192] Error preparing data for projected volume kube-api-access-8cdgx for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 17 18:11:08 running-upgrade-891000 kubelet[12334]: E0717 18:11:08.949322   12334 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/fcc79c1a-d93d-44f3-9e97-04e2282b2324-kube-api-access-8cdgx podName:fcc79c1a-d93d-44f3-9e97-04e2282b2324 nodeName:}" failed. No retries permitted until 2024-07-17 18:11:09.449308676 +0000 UTC m=+13.480043653 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8cdgx" (UniqueName: "kubernetes.io/projected/fcc79c1a-d93d-44f3-9e97-04e2282b2324-kube-api-access-8cdgx") pod "storage-provisioner" (UID: "fcc79c1a-d93d-44f3-9e97-04e2282b2324") : configmap "kube-root-ca.crt" not found
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: E0717 18:11:09.549432   12334 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: E0717 18:11:09.549455   12334 projected.go:192] Error preparing data for projected volume kube-api-access-8cdgx for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: E0717 18:11:09.549486   12334 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/fcc79c1a-d93d-44f3-9e97-04e2282b2324-kube-api-access-8cdgx podName:fcc79c1a-d93d-44f3-9e97-04e2282b2324 nodeName:}" failed. No retries permitted until 2024-07-17 18:11:10.54947576 +0000 UTC m=+14.580210738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8cdgx" (UniqueName: "kubernetes.io/projected/fcc79c1a-d93d-44f3-9e97-04e2282b2324-kube-api-access-8cdgx") pod "storage-provisioner" (UID: "fcc79c1a-d93d-44f3-9e97-04e2282b2324") : configmap "kube-root-ca.crt" not found
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: I0717 18:11:09.650927   12334 topology_manager.go:200] "Topology Admit Handler"
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: I0717 18:11:09.749829   12334 topology_manager.go:200] "Topology Admit Handler"
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: I0717 18:11:09.756439   12334 topology_manager.go:200] "Topology Admit Handler"
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: I0717 18:11:09.853496   12334 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t95nv\" (UniqueName: \"kubernetes.io/projected/36672b96-482f-4028-8cd9-312c9a1ccdce-kube-api-access-t95nv\") pod \"coredns-6d4b75cb6d-qt986\" (UID: \"36672b96-482f-4028-8cd9-312c9a1ccdce\") " pod="kube-system/coredns-6d4b75cb6d-qt986"
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: I0717 18:11:09.853527   12334 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b6438b1-1e62-4b69-b851-bc10f603ebd9-xtables-lock\") pod \"kube-proxy-mpn52\" (UID: \"1b6438b1-1e62-4b69-b851-bc10f603ebd9\") " pod="kube-system/kube-proxy-mpn52"
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: I0717 18:11:09.853543   12334 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36672b96-482f-4028-8cd9-312c9a1ccdce-config-volume\") pod \"coredns-6d4b75cb6d-qt986\" (UID: \"36672b96-482f-4028-8cd9-312c9a1ccdce\") " pod="kube-system/coredns-6d4b75cb6d-qt986"
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: I0717 18:11:09.853562   12334 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1b6438b1-1e62-4b69-b851-bc10f603ebd9-kube-proxy\") pod \"kube-proxy-mpn52\" (UID: \"1b6438b1-1e62-4b69-b851-bc10f603ebd9\") " pod="kube-system/kube-proxy-mpn52"
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: I0717 18:11:09.853574   12334 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s54t5\" (UniqueName: \"kubernetes.io/projected/1b6438b1-1e62-4b69-b851-bc10f603ebd9-kube-api-access-s54t5\") pod \"kube-proxy-mpn52\" (UID: \"1b6438b1-1e62-4b69-b851-bc10f603ebd9\") " pod="kube-system/kube-proxy-mpn52"
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: I0717 18:11:09.853595   12334 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b6438b1-1e62-4b69-b851-bc10f603ebd9-lib-modules\") pod \"kube-proxy-mpn52\" (UID: \"1b6438b1-1e62-4b69-b851-bc10f603ebd9\") " pod="kube-system/kube-proxy-mpn52"
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: I0717 18:11:09.954208   12334 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddcd92c0-a7be-46df-8746-f3cb912ea292-config-volume\") pod \"coredns-6d4b75cb6d-f9g7j\" (UID: \"ddcd92c0-a7be-46df-8746-f3cb912ea292\") " pod="kube-system/coredns-6d4b75cb6d-f9g7j"
	Jul 17 18:11:09 running-upgrade-891000 kubelet[12334]: I0717 18:11:09.954267   12334 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqck6\" (UniqueName: \"kubernetes.io/projected/ddcd92c0-a7be-46df-8746-f3cb912ea292-kube-api-access-mqck6\") pod \"coredns-6d4b75cb6d-f9g7j\" (UID: \"ddcd92c0-a7be-46df-8746-f3cb912ea292\") " pod="kube-system/coredns-6d4b75cb6d-f9g7j"
	Jul 17 18:14:57 running-upgrade-891000 kubelet[12334]: I0717 18:14:57.501913   12334 scope.go:110] "RemoveContainer" containerID="cf751089cf65d602adfc1e8d7daa30a7f119bb5a5709a2d36e4778368903ac0c"
	Jul 17 18:14:57 running-upgrade-891000 kubelet[12334]: I0717 18:14:57.521693   12334 scope.go:110] "RemoveContainer" containerID="cb43bc7ced8504f803932511a2c47a864e1e40c5fadc8744f645e535c64a416b"
	
	
	==> storage-provisioner [926cd5eb774c] <==
	I0717 18:11:10.998866       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:11:11.004559       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:11:11.004975       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:11:11.013152       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:11:11.013555       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-891000_b7a7cf9f-d678-4e6d-b71f-58e52e4d1af5!
	I0717 18:11:11.015478       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74aae0bd-3011-43af-97c9-1f437c011733", APIVersion:"v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-891000_b7a7cf9f-d678-4e6d-b71f-58e52e4d1af5 became leader
	I0717 18:11:11.114062       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-891000_b7a7cf9f-d678-4e6d-b71f-58e52e4d1af5!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-891000 -n running-upgrade-891000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-891000 -n running-upgrade-891000: exit status 2 (15.590445333s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-891000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-891000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-891000
helpers_test.go:178: (dbg) Non-zero exit: out/minikube-darwin-arm64 delete -p running-upgrade-891000: signal: killed (2m0.004218083s)
helpers_test.go:180: failed cleanup: signal: killed
--- FAIL: TestRunningBinaryUpgrade (708.68s)
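
The TestKubernetesUpgrade run below fails on the pattern that dominates this report: libmachine cannot pass the VM's network file descriptor to qemu-system-aarch64 because nothing is accepting connections on /var/run/socket_vmnet. A minimal Go sketch of that connectivity check (a hypothetical diagnostic, not something the suite actually runs):

	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// probeSocketVMNet dials the Unix socket that the qemu2 driver hands to
	// qemu via socket_vmnet_client. A "connection refused" here corresponds
	// to the StartHost failures in the output below and means the
	// socket_vmnet daemon is not running on the host (or the path is wrong).
	func probeSocketVMNet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}
	
	func main() {
		if err := probeSocketVMNet("/var/run/socket_vmnet"); err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial is refused, restarting the socket_vmnet daemon on the host should clear every failure of this shape in the report.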

TestKubernetesUpgrade (18.93s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.793346708s)

-- stdout --
	* [kubernetes-upgrade-067000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-067000" primary control-plane node in "kubernetes-upgrade-067000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-067000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:08:40.949140    8675 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:08:40.949289    8675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:08:40.949292    8675 out.go:304] Setting ErrFile to fd 2...
	I0717 11:08:40.949295    8675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:08:40.949427    8675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:08:40.950636    8675 out.go:298] Setting JSON to false
	I0717 11:08:40.967153    8675 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5892,"bootTime":1721233828,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:08:40.967223    8675 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:08:40.974020    8675 out.go:177] * [kubernetes-upgrade-067000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:08:40.980950    8675 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:08:40.981060    8675 notify.go:220] Checking for updates...
	I0717 11:08:40.987988    8675 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:08:40.991036    8675 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:08:40.994011    8675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:08:40.996985    8675 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:08:41.000008    8675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:08:41.003275    8675 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:08:41.003343    8675 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:08:41.003401    8675 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:08:41.006929    8675 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:08:41.013937    8675 start.go:297] selected driver: qemu2
	I0717 11:08:41.013944    8675 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:08:41.013949    8675 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:08:41.016127    8675 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:08:41.017292    8675 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:08:41.020049    8675 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 11:08:41.020062    8675 cni.go:84] Creating CNI manager for ""
	I0717 11:08:41.020069    8675 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0717 11:08:41.020095    8675 start.go:340] cluster config:
	{Name:kubernetes-upgrade-067000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-067000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:08:41.023713    8675 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:08:41.030919    8675 out.go:177] * Starting "kubernetes-upgrade-067000" primary control-plane node in "kubernetes-upgrade-067000" cluster
	I0717 11:08:41.035011    8675 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 11:08:41.035033    8675 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0717 11:08:41.035048    8675 cache.go:56] Caching tarball of preloaded images
	I0717 11:08:41.035106    8675 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:08:41.035112    8675 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0717 11:08:41.035187    8675 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/kubernetes-upgrade-067000/config.json ...
	I0717 11:08:41.035204    8675 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/kubernetes-upgrade-067000/config.json: {Name:mkefa1450f290a36c3f957bd8f2374be677bdaf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:08:41.035528    8675 start.go:360] acquireMachinesLock for kubernetes-upgrade-067000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:08:41.035557    8675 start.go:364] duration metric: took 24.166µs to acquireMachinesLock for "kubernetes-upgrade-067000"
	I0717 11:08:41.035567    8675 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-067000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-067000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:08:41.035587    8675 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:08:41.039956    8675 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:08:41.055540    8675 start.go:159] libmachine.API.Create for "kubernetes-upgrade-067000" (driver="qemu2")
	I0717 11:08:41.055569    8675 client.go:168] LocalClient.Create starting
	I0717 11:08:41.055630    8675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:08:41.055665    8675 main.go:141] libmachine: Decoding PEM data...
	I0717 11:08:41.055686    8675 main.go:141] libmachine: Parsing certificate...
	I0717 11:08:41.055729    8675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:08:41.055752    8675 main.go:141] libmachine: Decoding PEM data...
	I0717 11:08:41.055763    8675 main.go:141] libmachine: Parsing certificate...
	I0717 11:08:41.056166    8675 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:08:41.212801    8675 main.go:141] libmachine: Creating SSH key...
	I0717 11:08:41.261582    8675 main.go:141] libmachine: Creating Disk image...
	I0717 11:08:41.261587    8675 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:08:41.261771    8675 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0717 11:08:41.271179    8675 main.go:141] libmachine: STDOUT: 
	I0717 11:08:41.271194    8675 main.go:141] libmachine: STDERR: 
	I0717 11:08:41.271250    8675 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2 +20000M
	I0717 11:08:41.279497    8675 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:08:41.279510    8675 main.go:141] libmachine: STDERR: 
	I0717 11:08:41.279528    8675 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0717 11:08:41.279534    8675 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:08:41.279548    8675 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:08:41.279574    8675 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:3c:76:39:7e:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0717 11:08:41.281234    8675 main.go:141] libmachine: STDOUT: 
	I0717 11:08:41.281249    8675 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:08:41.281265    8675 client.go:171] duration metric: took 225.692417ms to LocalClient.Create
	I0717 11:08:43.283538    8675 start.go:128] duration metric: took 2.247917416s to createHost
	I0717 11:08:43.283614    8675 start.go:83] releasing machines lock for "kubernetes-upgrade-067000", held for 2.248044375s
	W0717 11:08:43.283693    8675 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:08:43.291101    8675 out.go:177] * Deleting "kubernetes-upgrade-067000" in qemu2 ...
	W0717 11:08:43.317267    8675 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:08:43.317300    8675 start.go:729] Will try again in 5 seconds ...
	I0717 11:08:48.319577    8675 start.go:360] acquireMachinesLock for kubernetes-upgrade-067000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:08:48.320146    8675 start.go:364] duration metric: took 450.084µs to acquireMachinesLock for "kubernetes-upgrade-067000"
	I0717 11:08:48.320296    8675 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-067000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-067000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:08:48.320525    8675 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:08:48.325427    8675 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:08:48.387922    8675 start.go:159] libmachine.API.Create for "kubernetes-upgrade-067000" (driver="qemu2")
	I0717 11:08:48.387974    8675 client.go:168] LocalClient.Create starting
	I0717 11:08:48.388107    8675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:08:48.388163    8675 main.go:141] libmachine: Decoding PEM data...
	I0717 11:08:48.388177    8675 main.go:141] libmachine: Parsing certificate...
	I0717 11:08:48.388232    8675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:08:48.388268    8675 main.go:141] libmachine: Decoding PEM data...
	I0717 11:08:48.388289    8675 main.go:141] libmachine: Parsing certificate...
	I0717 11:08:48.388723    8675 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:08:48.533457    8675 main.go:141] libmachine: Creating SSH key...
	I0717 11:08:48.659386    8675 main.go:141] libmachine: Creating Disk image...
	I0717 11:08:48.659393    8675 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:08:48.659579    8675 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0717 11:08:48.669454    8675 main.go:141] libmachine: STDOUT: 
	I0717 11:08:48.669473    8675 main.go:141] libmachine: STDERR: 
	I0717 11:08:48.669538    8675 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2 +20000M
	I0717 11:08:48.677560    8675 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:08:48.677575    8675 main.go:141] libmachine: STDERR: 
	I0717 11:08:48.677586    8675 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0717 11:08:48.677592    8675 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:08:48.677602    8675 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:08:48.677637    8675 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c5:a0:3b:61:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0717 11:08:48.679868    8675 main.go:141] libmachine: STDOUT: 
	I0717 11:08:48.679890    8675 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:08:48.679913    8675 client.go:171] duration metric: took 291.9345ms to LocalClient.Create
	I0717 11:08:50.682005    8675 start.go:128] duration metric: took 2.361439084s to createHost
	I0717 11:08:50.682023    8675 start.go:83] releasing machines lock for "kubernetes-upgrade-067000", held for 2.361852125s
	W0717 11:08:50.682110    8675 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-067000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-067000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:08:50.691322    8675 out.go:177] 
	W0717 11:08:50.697328    8675 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:08:50.697336    8675 out.go:239] * 
	* 
	W0717 11:08:50.697862    8675 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:08:50.707351    8675 out.go:177] 

** /stderr **
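Every failed attempt in the stderr above dies on the same host-side error: socket_vmnet_client cannot reach the socket_vmnet daemon's UNIX socket at /var/run/socket_vmnet, so QEMU never receives the vmnet socket it expects on fd 3 (the -netdev socket,id=net0,fd=3 argument) and VM creation aborts. A minimal diagnosis sketch for the build host, reusing the paths recorded in the log; the manual daemon invocation and gateway address follow the socket_vmnet documentation and are assumptions, not something this report verified:

    # Does the daemon's UNIX socket exist on the host?
    ls -l /var/run/socket_vmnet
    # Is any socket_vmnet daemon process alive?
    pgrep -fl socket_vmnet
    # Fallback: start the daemon by hand (vmnet access requires root)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet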
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-067000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-067000: (3.712216125s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-067000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-067000 status --format={{.Host}}: exit status 7 (66.733208ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.187786666s)

-- stdout --
	* [kubernetes-upgrade-067000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-067000" primary control-plane node in "kubernetes-upgrade-067000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-067000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-067000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:08:54.524800    8711 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:08:54.524944    8711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:08:54.524948    8711 out.go:304] Setting ErrFile to fd 2...
	I0717 11:08:54.524950    8711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:08:54.525081    8711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:08:54.526088    8711 out.go:298] Setting JSON to false
	I0717 11:08:54.543504    8711 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5906,"bootTime":1721233828,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:08:54.543576    8711 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:08:54.548609    8711 out.go:177] * [kubernetes-upgrade-067000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:08:54.556581    8711 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:08:54.556642    8711 notify.go:220] Checking for updates...
	I0717 11:08:54.562589    8711 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:08:54.565511    8711 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:08:54.569571    8711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:08:54.572561    8711 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:08:54.575593    8711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:08:54.578785    8711 config.go:182] Loaded profile config "kubernetes-upgrade-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0717 11:08:54.579061    8711 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:08:54.583607    8711 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:08:54.590483    8711 start.go:297] selected driver: qemu2
	I0717 11:08:54.590488    8711 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-067000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-067000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:08:54.590528    8711 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:08:54.592975    8711 cni.go:84] Creating CNI manager for ""
	I0717 11:08:54.592991    8711 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:08:54.593021    8711 start.go:340] cluster config:
	{Name:kubernetes-upgrade-067000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-067000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:08:54.596575    8711 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:08:54.604541    8711 out.go:177] * Starting "kubernetes-upgrade-067000" primary control-plane node in "kubernetes-upgrade-067000" cluster
	I0717 11:08:54.608594    8711 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 11:08:54.608612    8711 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0717 11:08:54.608630    8711 cache.go:56] Caching tarball of preloaded images
	I0717 11:08:54.608699    8711 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:08:54.608704    8711 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0717 11:08:54.608764    8711 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/kubernetes-upgrade-067000/config.json ...
	I0717 11:08:54.609215    8711 start.go:360] acquireMachinesLock for kubernetes-upgrade-067000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:08:54.609244    8711 start.go:364] duration metric: took 22.542µs to acquireMachinesLock for "kubernetes-upgrade-067000"
	I0717 11:08:54.609252    8711 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:08:54.609257    8711 fix.go:54] fixHost starting: 
	I0717 11:08:54.609369    8711 fix.go:112] recreateIfNeeded on kubernetes-upgrade-067000: state=Stopped err=<nil>
	W0717 11:08:54.609377    8711 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:08:54.617558    8711 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-067000" ...
	I0717 11:08:54.621582    8711 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:08:54.621618    8711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c5:a0:3b:61:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0717 11:08:54.623605    8711 main.go:141] libmachine: STDOUT: 
	I0717 11:08:54.623626    8711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:08:54.623656    8711 fix.go:56] duration metric: took 14.397375ms for fixHost
	I0717 11:08:54.623660    8711 start.go:83] releasing machines lock for "kubernetes-upgrade-067000", held for 14.411666ms
	W0717 11:08:54.623666    8711 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:08:54.623709    8711 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:08:54.623714    8711 start.go:729] Will try again in 5 seconds ...
	I0717 11:08:59.625964    8711 start.go:360] acquireMachinesLock for kubernetes-upgrade-067000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:08:59.626544    8711 start.go:364] duration metric: took 433.125µs to acquireMachinesLock for "kubernetes-upgrade-067000"
	I0717 11:08:59.626652    8711 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:08:59.626669    8711 fix.go:54] fixHost starting: 
	I0717 11:08:59.627595    8711 fix.go:112] recreateIfNeeded on kubernetes-upgrade-067000: state=Stopped err=<nil>
	W0717 11:08:59.627621    8711 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:08:59.634964    8711 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-067000" ...
	I0717 11:08:59.639036    8711 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:08:59.639342    8711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c5:a0:3b:61:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0717 11:08:59.649030    8711 main.go:141] libmachine: STDOUT: 
	I0717 11:08:59.649083    8711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:08:59.649145    8711 fix.go:56] duration metric: took 22.476625ms for fixHost
	I0717 11:08:59.649181    8711 start.go:83] releasing machines lock for "kubernetes-upgrade-067000", held for 22.614083ms
	W0717 11:08:59.649363    8711 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-067000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-067000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:08:59.656949    8711 out.go:177] 
	W0717 11:08:59.660025    8711 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:08:59.660064    8711 out.go:239] * 
	* 
	W0717 11:08:59.661948    8711 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:08:59.670974    8711 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-067000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-067000 version --output=json: exit status 1 (61.058083ms)

** stderr ** 
	error: context "kubernetes-upgrade-067000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
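The kubectl failure above is a downstream effect of the aborted starts rather than a separate bug: since no cluster ever came up, minikube never wrote a "kubernetes-upgrade-067000" context into the kubeconfig. A quick confirmation from the same workspace (standard kubectl, shown as a debugging sketch, not part of the harness):

    KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig kubectl config get-contexts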
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-17 11:08:59.745784 -0700 PDT m=+922.390636584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-067000 -n kubernetes-upgrade-067000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-067000 -n kubernetes-upgrade-067000: exit status 7 (32.453083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-067000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-067000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-067000
--- FAIL: TestKubernetesUpgrade (18.93s)
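For local debugging once /var/run/socket_vmnet is reachable, the sequence below replays the upgrade path this test exercised, with the profile name and flags copied from the log (a manual sketch, not part of the test harness):

    out/minikube-darwin-arm64 delete -p kubernetes-upgrade-067000
    out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.20.0 --driver=qemu2
    out/minikube-darwin-arm64 stop -p kubernetes-upgrade-067000
    out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --driver=qemu2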

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.31s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19282
- KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current760841762/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.31s)
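DRV_UNSUPPORTED_OS here (and in the identical v1.2.0-to-current subtest below) reflects the worker's architecture rather than a regression: the hyperkit driver only exists for Intel Macs, while this agent is Apple silicon. A host-side check that makes the skip condition explicit, with qemu2 as the driver this job already uses elsewhere (sketch only):

    uname -m                                         # arm64 on this agent; hyperkit needs x86_64
    out/minikube-darwin-arm64 start --driver=qemu2   # supported VM driver on darwin/arm64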

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.17s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19282
- KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3526577303/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.17s)

TestStoppedBinaryUpgrade/Upgrade (577.79s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4207235321 start -p stopped-upgrade-058000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4207235321 start -p stopped-upgrade-058000 --memory=2200 --vm-driver=qemu2 : (53.247498375s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4207235321 -p stopped-upgrade-058000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4207235321 -p stopped-upgrade-058000 stop: (3.098522167s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-058000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-058000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.353673291s)

-- stdout --
	* [stopped-upgrade-058000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-058000" primary control-plane node in "stopped-upgrade-058000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-058000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0717 11:09:57.195923    8746 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:09:57.196279    8746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:09:57.196286    8746 out.go:304] Setting ErrFile to fd 2...
	I0717 11:09:57.196392    8746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:09:57.196660    8746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:09:57.198140    8746 out.go:298] Setting JSON to false
	I0717 11:09:57.218226    8746 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5969,"bootTime":1721233828,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:09:57.218305    8746 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:09:57.222699    8746 out.go:177] * [stopped-upgrade-058000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:09:57.230682    8746 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:09:57.230778    8746 notify.go:220] Checking for updates...
	I0717 11:09:57.237642    8746 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:09:57.239093    8746 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:09:57.242654    8746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:09:57.245625    8746 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:09:57.248649    8746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:09:57.251876    8746 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:09:57.255634    8746 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 11:09:57.258635    8746 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:09:57.265605    8746 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:09:57.272675    8746 start.go:297] selected driver: qemu2
	I0717 11:09:57.272681    8746 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-058000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51504 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:09:57.272742    8746 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:09:57.275373    8746 cni.go:84] Creating CNI manager for ""
	I0717 11:09:57.275438    8746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:09:57.275464    8746 start.go:340] cluster config:
	{Name:stopped-upgrade-058000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51504 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:09:57.275521    8746 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:09:57.282617    8746 out.go:177] * Starting "stopped-upgrade-058000" primary control-plane node in "stopped-upgrade-058000" cluster
	I0717 11:09:57.285624    8746 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0717 11:09:57.285642    8746 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0717 11:09:57.285666    8746 cache.go:56] Caching tarball of preloaded images
	I0717 11:09:57.285746    8746 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:09:57.285752    8746 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0717 11:09:57.285808    8746 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/config.json ...
	I0717 11:09:57.286302    8746 start.go:360] acquireMachinesLock for stopped-upgrade-058000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:09:57.286338    8746 start.go:364] duration metric: took 29.584µs to acquireMachinesLock for "stopped-upgrade-058000"
	I0717 11:09:57.286346    8746 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:09:57.286351    8746 fix.go:54] fixHost starting: 
	I0717 11:09:57.286467    8746 fix.go:112] recreateIfNeeded on stopped-upgrade-058000: state=Stopped err=<nil>
	W0717 11:09:57.286475    8746 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:09:57.290654    8746 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-058000" ...
	I0717 11:09:57.297637    8746 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:09:57.297710    8746 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51470-:22,hostfwd=tcp::51471-:2376,hostname=stopped-upgrade-058000 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/disk.qcow2
	I0717 11:09:57.343742    8746 main.go:141] libmachine: STDOUT: 
	I0717 11:09:57.343780    8746 main.go:141] libmachine: STDERR: 
	I0717 11:09:57.343786    8746 main.go:141] libmachine: Waiting for VM to start (ssh -p 51470 docker@127.0.0.1)...
	I0717 11:10:17.726392    8746 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/config.json ...
	I0717 11:10:17.727131    8746 machine.go:94] provisionDockerMachine start ...
	I0717 11:10:17.727348    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:17.727753    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:17.727768    8746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 11:10:17.804170    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 11:10:17.804191    8746 buildroot.go:166] provisioning hostname "stopped-upgrade-058000"
	I0717 11:10:17.804252    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:17.804393    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:17.804400    8746 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-058000 && echo "stopped-upgrade-058000" | sudo tee /etc/hostname
	I0717 11:10:17.868152    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-058000
	
	I0717 11:10:17.868210    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:17.868336    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:17.868343    8746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-058000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-058000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-058000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 11:10:17.929096    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 11:10:17.929111    8746 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19282-6331/.minikube CaCertPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19282-6331/.minikube}
	I0717 11:10:17.929126    8746 buildroot.go:174] setting up certificates
	I0717 11:10:17.929131    8746 provision.go:84] configureAuth start
	I0717 11:10:17.929135    8746 provision.go:143] copyHostCerts
	I0717 11:10:17.929201    8746 exec_runner.go:144] found /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.pem, removing ...
	I0717 11:10:17.929208    8746 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.pem
	I0717 11:10:17.929309    8746 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.pem (1078 bytes)
	I0717 11:10:17.929493    8746 exec_runner.go:144] found /Users/jenkins/minikube-integration/19282-6331/.minikube/cert.pem, removing ...
	I0717 11:10:17.929497    8746 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19282-6331/.minikube/cert.pem
	I0717 11:10:17.929538    8746 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19282-6331/.minikube/cert.pem (1123 bytes)
	I0717 11:10:17.929645    8746 exec_runner.go:144] found /Users/jenkins/minikube-integration/19282-6331/.minikube/key.pem, removing ...
	I0717 11:10:17.929648    8746 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19282-6331/.minikube/key.pem
	I0717 11:10:17.929688    8746 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19282-6331/.minikube/key.pem (1679 bytes)
	I0717 11:10:17.929780    8746 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-058000 san=[127.0.0.1 localhost minikube stopped-upgrade-058000]
	I0717 11:10:17.973148    8746 provision.go:177] copyRemoteCerts
	I0717 11:10:17.973174    8746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 11:10:17.973180    8746 sshutil.go:53] new ssh client: &{IP:localhost Port:51470 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/id_rsa Username:docker}
	I0717 11:10:18.005649    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 11:10:18.012435    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 11:10:18.019098    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 11:10:18.026758    8746 provision.go:87] duration metric: took 97.616875ms to configureAuth
	I0717 11:10:18.026768    8746 buildroot.go:189] setting minikube options for container-runtime
	I0717 11:10:18.026884    8746 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:10:18.026915    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:18.026997    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:18.027002    8746 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 11:10:18.091936    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 11:10:18.091948    8746 buildroot.go:70] root file system type: tmpfs
	I0717 11:10:18.092004    8746 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 11:10:18.092054    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:18.092178    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:18.092215    8746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 11:10:18.156794    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 11:10:18.156856    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:18.156989    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:18.156997    8746 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 11:10:18.501310    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 11:10:18.501324    8746 machine.go:97] duration metric: took 774.1785ms to provisionDockerMachine
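	The `sudo diff -u old new || { mv ...; systemctl ... }` command above is what makes the unit-file install idempotent: `diff` exits non-zero both when the files differ and when the old unit does not exist yet (hence the "can't stat" output here), so the replacement and restart only run when something actually changed. A minimal Go sketch of the same idiom, assuming local shell access; the function name is illustrative, not minikube's API:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // updateUnit replaces unitPath with newPath and restarts the service,
    // but only when `diff -u` exits non-zero (files differ, or unitPath is
    // missing) -- the same short-circuit seen in the log above.
    func updateUnit(unitPath, newPath, service string) error {
        script := fmt.Sprintf(
            "sudo diff -u %[1]s %[2]s || { sudo mv %[2]s %[1]s; "+
                "sudo systemctl daemon-reload && sudo systemctl enable %[3]s "+
                "&& sudo systemctl restart %[3]s; }",
            unitPath, newPath, service)
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        _ = updateUnit("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new", "docker")
    }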
	I0717 11:10:18.501330    8746 start.go:293] postStartSetup for "stopped-upgrade-058000" (driver="qemu2")
	I0717 11:10:18.501336    8746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 11:10:18.501402    8746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 11:10:18.501410    8746 sshutil.go:53] new ssh client: &{IP:localhost Port:51470 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/id_rsa Username:docker}
	I0717 11:10:18.532275    8746 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 11:10:18.533555    8746 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 11:10:18.533562    8746 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19282-6331/.minikube/addons for local assets ...
	I0717 11:10:18.533638    8746 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19282-6331/.minikube/files for local assets ...
	I0717 11:10:18.533730    8746 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/ssl/certs/68202.pem -> 68202.pem in /etc/ssl/certs
	I0717 11:10:18.533823    8746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 11:10:18.536230    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/ssl/certs/68202.pem --> /etc/ssl/certs/68202.pem (1708 bytes)
	I0717 11:10:18.543019    8746 start.go:296] duration metric: took 41.684458ms for postStartSetup
	I0717 11:10:18.543036    8746 fix.go:56] duration metric: took 21.256652583s for fixHost
	I0717 11:10:18.543065    8746 main.go:141] libmachine: Using SSH client type: native
	I0717 11:10:18.543170    8746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ed29b0] 0x104ed5210 <nil>  [] 0s} localhost 51470 <nil> <nil>}
	I0717 11:10:18.543179    8746 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 11:10:18.603458    8746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721239818.457815796
	
	I0717 11:10:18.603479    8746 fix.go:216] guest clock: 1721239818.457815796
	I0717 11:10:18.603483    8746 fix.go:229] Guest: 2024-07-17 11:10:18.457815796 -0700 PDT Remote: 2024-07-17 11:10:18.543038 -0700 PDT m=+21.377990501 (delta=-85.222204ms)
	I0717 11:10:18.603494    8746 fix.go:200] guest clock delta is within tolerance: -85.222204ms
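	The clock check above runs `date +%s.%N` on the guest and compares the result against the host clock; the roughly -85ms delta is well inside tolerance, so no resync is needed. A sketch of that comparison, assuming the guest output has already been captured over SSH (the helper name is illustrative):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output ("seconds.nanoseconds") and
    // returns guest-minus-host as a signed duration.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secStr, fracStr, _ := strings.Cut(strings.TrimSpace(guestOut), ".")
        sec, err := strconv.ParseInt(secStr, 10, 64)
        if err != nil {
            return 0, err
        }
        // normalize the fractional part to exactly nine digits
        nsec, err := strconv.ParseInt((fracStr + "000000000")[:9], 10, 64)
        if err != nil {
            return 0, err
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        d, _ := clockDelta("1721239818.457815796\n", time.Now())
        fmt.Println("guest clock delta:", d) // small skew is expected over SSH
    }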
	I0717 11:10:18.603498    8746 start.go:83] releasing machines lock for "stopped-upgrade-058000", held for 21.317122541s
	I0717 11:10:18.603562    8746 ssh_runner.go:195] Run: cat /version.json
	I0717 11:10:18.603565    8746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 11:10:18.603570    8746 sshutil.go:53] new ssh client: &{IP:localhost Port:51470 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/id_rsa Username:docker}
	I0717 11:10:18.603580    8746 sshutil.go:53] new ssh client: &{IP:localhost Port:51470 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/id_rsa Username:docker}
	W0717 11:10:18.604156    8746 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51470: connect: connection refused
	I0717 11:10:18.604178    8746 retry.go:31] will retry after 352.608115ms: dial tcp [::1]:51470: connect: connection refused
	W0717 11:10:19.003177    8746 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 11:10:19.003304    8746 ssh_runner.go:195] Run: systemctl --version
	I0717 11:10:19.007093    8746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 11:10:19.010408    8746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 11:10:19.010458    8746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 11:10:19.015406    8746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 11:10:19.023020    8746 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 11:10:19.023033    8746 start.go:495] detecting cgroup driver to use...
	I0717 11:10:19.023151    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 11:10:19.032969    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0717 11:10:19.036797    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 11:10:19.040242    8746 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 11:10:19.040273    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 11:10:19.043805    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 11:10:19.047255    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 11:10:19.050643    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 11:10:19.054078    8746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 11:10:19.057043    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 11:10:19.059716    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 11:10:19.063006    8746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 11:10:19.066396    8746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 11:10:19.069077    8746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 11:10:19.071622    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:19.155966    8746 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 11:10:19.162638    8746 start.go:495] detecting cgroup driver to use...
	I0717 11:10:19.162719    8746 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 11:10:19.167866    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 11:10:19.172868    8746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 11:10:19.180975    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 11:10:19.185500    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 11:10:19.190122    8746 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 11:10:19.235781    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 11:10:19.240527    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 11:10:19.245931    8746 ssh_runner.go:195] Run: which cri-dockerd
	I0717 11:10:19.247094    8746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 11:10:19.250042    8746 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 11:10:19.254887    8746 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 11:10:19.341274    8746 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 11:10:19.426886    8746 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 11:10:19.426957    8746 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 11:10:19.432411    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:19.516886    8746 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 11:10:20.673719    8746 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.156807333s)
	I0717 11:10:20.673782    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 11:10:20.683637    8746 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 11:10:20.690230    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 11:10:20.694584    8746 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 11:10:20.774594    8746 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 11:10:20.846293    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:20.923101    8746 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 11:10:20.929228    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 11:10:20.933456    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:21.015513    8746 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 11:10:21.053526    8746 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 11:10:21.053597    8746 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
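	The "Will wait 60s for socket path" step above is a simple poll: stat the CRI socket until it exists or the deadline passes. A minimal sketch of that wait loop (the 500ms poll interval is an assumption; minikube's actual retry cadence may differ):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls stat() on path until it succeeds or the timeout
    // elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }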
	I0717 11:10:21.056682    8746 start.go:563] Will wait 60s for crictl version
	I0717 11:10:21.056740    8746 ssh_runner.go:195] Run: which crictl
	I0717 11:10:21.058293    8746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 11:10:21.072847    8746 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0717 11:10:21.072914    8746 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 11:10:21.088710    8746 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 11:10:21.113162    8746 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0717 11:10:21.113275    8746 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0717 11:10:21.114524    8746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 11:10:21.118493    8746 kubeadm.go:883] updating cluster {Name:stopped-upgrade-058000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51504 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0717 11:10:21.118542    8746 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0717 11:10:21.118582    8746 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 11:10:21.129047    8746 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 11:10:21.129055    8746 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0717 11:10:21.129099    8746 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 11:10:21.132006    8746 ssh_runner.go:195] Run: which lz4
	I0717 11:10:21.133418    8746 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 11:10:21.134648    8746 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 11:10:21.134657    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0717 11:10:22.095290    8746 docker.go:649] duration metric: took 961.899041ms to copy over tarball
	I0717 11:10:22.095348    8746 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 11:10:23.263978    8746 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.1685995s)
	I0717 11:10:23.264000    8746 ssh_runner.go:146] rm: /preloaded.tar.lz4
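	The preload step just completed is: stat the tarball (missing on first run), scp the ~360 MB lz4 archive over, stream it through `tar -I lz4` into /var with extended attributes preserved, then delete it. A local sketch of the extract-and-clean half, assuming root; the wrapper function is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload unpacks the lz4-compressed image preload into dest,
    // keeping security.capability xattrs, then removes the tarball --
    // mirroring the tar and rm commands in the log above.
    func extractPreload(tarball, dest string) error {
        cmd := exec.Command("sudo", "tar", "--xattrs",
            "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", dest, "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            return fmt.Errorf("extract %s: %w", tarball, err)
        }
        return exec.Command("sudo", "rm", "-f", tarball).Run()
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Println(err)
        }
    }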
	I0717 11:10:23.279865    8746 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 11:10:23.283099    8746 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0717 11:10:23.288266    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:23.368546    8746 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 11:10:24.992105    8746 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.623531917s)
	I0717 11:10:24.992232    8746 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 11:10:25.007117    8746 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 11:10:25.007126    8746 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0717 11:10:25.007131    8746 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
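	The preloaded images above carry the old `k8s.gcr.io` names, so every `registry.k8s.io`-named image is treated as missing, and each one goes through the cycle shown below: inspect in the guest runtime, compare against the cached ID, `docker rmi` the stale copy, scp the cached tar, and pipe it into `docker load`. A sketch of the two decision points; the helper names and the "sha256:"-prefix normalization are assumptions, not minikube's exact code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether the runtime's copy of image is absent
    // or has a different ID than the cached hash (the "needs transfer"
    // lines below). Stripping the "sha256:" prefix is an assumption about
    // how the IDs are compared.
    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // image not present at all
        }
        id := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
        return id != wantID
    }

    // loadFromFile mirrors the `sudo cat <tar> | docker load` runs below.
    func loadFromFile(path string) error {
        return exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo cat %s | docker load", path)).Run()
    }

    func main() {
        img := "registry.k8s.io/pause:3.7"
        if needsTransfer(img, "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550") {
            _ = loadFromFile("/var/lib/minikube/images/pause_3.7")
        }
    }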
	I0717 11:10:25.011238    8746 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:25.013050    8746 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:10:25.014951    8746 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:25.015038    8746 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:10:25.017513    8746 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:10:25.017570    8746 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:10:25.019573    8746 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:10:25.019795    8746 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:10:25.020977    8746 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0717 11:10:25.021337    8746 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:10:25.022483    8746 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:10:25.022594    8746 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:10:25.023974    8746 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:10:25.024068    8746 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0717 11:10:25.025203    8746 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:10:25.025836    8746 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:10:25.409243    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:10:25.411541    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:10:25.424703    8746 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0717 11:10:25.424728    8746 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:10:25.424783    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:10:25.429548    8746 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0717 11:10:25.429569    8746 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:10:25.429618    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:10:25.438683    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0717 11:10:25.443783    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0717 11:10:25.447705    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:10:25.455313    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0717 11:10:25.458223    8746 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0717 11:10:25.458240    8746 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:10:25.458275    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:10:25.465667    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0717 11:10:25.468469    8746 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0717 11:10:25.468489    8746 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:10:25.468517    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0717 11:10:25.468526    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0717 11:10:25.474716    8746 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0717 11:10:25.474851    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:10:25.478745    8746 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0717 11:10:25.478768    8746 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0717 11:10:25.478812    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0717 11:10:25.483830    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0717 11:10:25.483955    8746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0717 11:10:25.498821    8746 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0717 11:10:25.498835    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0717 11:10:25.498841    8746 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:10:25.498875    8746 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0717 11:10:25.498884    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:10:25.498887    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0717 11:10:25.498935    8746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0717 11:10:25.512136    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:10:25.533096    8746 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0717 11:10:25.533118    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0717 11:10:25.533127    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0717 11:10:25.533222    8746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0717 11:10:25.549742    8746 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0717 11:10:25.549761    8746 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:10:25.549811    8746 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:10:25.557344    8746 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0717 11:10:25.557373    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0717 11:10:25.571098    8746 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0717 11:10:25.571119    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0717 11:10:25.585888    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0717 11:10:25.645957    8746 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0717 11:10:25.664372    8746 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0717 11:10:25.664387    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0717 11:10:25.667141    8746 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0717 11:10:25.667244    8746 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:25.771495    8746 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0717 11:10:25.771566    8746 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0717 11:10:25.771591    8746 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:25.771652    8746 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:25.808960    8746 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 11:10:25.809080    8746 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 11:10:25.819841    8746 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0717 11:10:25.819873    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0717 11:10:25.835171    8746 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0717 11:10:25.835184    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0717 11:10:25.982142    8746 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0717 11:10:25.982164    8746 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 11:10:25.982170    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0717 11:10:26.214202    8746 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 11:10:26.214244    8746 cache_images.go:92] duration metric: took 1.20710525s to LoadCachedImages
	W0717 11:10:26.214283    8746 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0717 11:10:26.214292    8746 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0717 11:10:26.214339    8746 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-058000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 11:10:26.214398    8746 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 11:10:26.227839    8746 cni.go:84] Creating CNI manager for ""
	I0717 11:10:26.227852    8746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:10:26.227857    8746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 11:10:26.227866    8746 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-058000 NodeName:stopped-upgrade-058000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 11:10:26.227935    8746 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-058000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 11:10:26.227987    8746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0717 11:10:26.231387    8746 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 11:10:26.231416    8746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 11:10:26.234628    8746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0717 11:10:26.239655    8746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 11:10:26.244923    8746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0717 11:10:26.250182    8746 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0717 11:10:26.251534    8746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
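	The `{ grep -v $'\t<name>$' /etc/hosts; echo "<ip>\t<name>"; } > /tmp/h.$$; sudo cp` pipeline above is an idempotent upsert: any stale line for the name is filtered out before the fresh mapping is appended, and the copy replaces the file in one step. The same logic in Go (illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops any existing entry ending in "\t<name>" and appends
    // a fresh "ip\tname" line, matching the shell pipeline above.
    func upsertHost(hosts, ip, name string) string {
        var keep []string
        for _, line := range strings.Split(hosts, "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return strings.Join(keep, "\n") + "\n"
    }

    func main() {
        in, _ := os.ReadFile("/etc/hosts")
        fmt.Print(upsertHost(string(in), "10.0.2.15",
            "control-plane.minikube.internal"))
        // writing the result back needs root, hence the `sudo cp` in the log
    }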
	I0717 11:10:26.255302    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:26.339041    8746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:10:26.350010    8746 certs.go:68] Setting up /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000 for IP: 10.0.2.15
	I0717 11:10:26.350018    8746 certs.go:194] generating shared ca certs ...
	I0717 11:10:26.350027    8746 certs.go:226] acquiring lock for ca certs: {Name:mkc544d9d9a3de35c1f6cee821ec7cd5d08f6f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:26.350202    8746 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.key
	I0717 11:10:26.350261    8746 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/proxy-client-ca.key
	I0717 11:10:26.350269    8746 certs.go:256] generating profile certs ...
	I0717 11:10:26.350343    8746 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/client.key
	I0717 11:10:26.350361    8746 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.key.8922329e
	I0717 11:10:26.350372    8746 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.crt.8922329e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0717 11:10:26.401776    8746 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.crt.8922329e ...
	I0717 11:10:26.401790    8746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.crt.8922329e: {Name:mk82b84f3bd3e95cf746ad95dd6bad65dcc92ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:26.402931    8746 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.key.8922329e ...
	I0717 11:10:26.402938    8746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.key.8922329e: {Name:mkbee49545955be66796292d3778fb9483e5628e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:26.403104    8746 certs.go:381] copying /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.crt.8922329e -> /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.crt
	I0717 11:10:26.403247    8746 certs.go:385] copying /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.key.8922329e -> /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.key
	I0717 11:10:26.403405    8746 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/proxy-client.key
	I0717 11:10:26.403538    8746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/6820.pem (1338 bytes)
	W0717 11:10:26.403567    8746 certs.go:480] ignoring /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/6820_empty.pem, impossibly tiny 0 bytes
	I0717 11:10:26.403574    8746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 11:10:26.403601    8746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem (1078 bytes)
	I0717 11:10:26.403626    8746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem (1123 bytes)
	I0717 11:10:26.403650    8746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/key.pem (1679 bytes)
	I0717 11:10:26.403907    8746 certs.go:484] found cert: /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/ssl/certs/68202.pem (1708 bytes)
	I0717 11:10:26.404359    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 11:10:26.411216    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 11:10:26.418239    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 11:10:26.425135    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 11:10:26.431818    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 11:10:26.439131    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 11:10:26.446883    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 11:10:26.454605    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 11:10:26.462251    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/ssl/certs/68202.pem --> /usr/share/ca-certificates/68202.pem (1708 bytes)
	I0717 11:10:26.469435    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 11:10:26.475915    8746 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/6820.pem --> /usr/share/ca-certificates/6820.pem (1338 bytes)
	I0717 11:10:26.482775    8746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 11:10:26.487985    8746 ssh_runner.go:195] Run: openssl version
	I0717 11:10:26.489845    8746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68202.pem && ln -fs /usr/share/ca-certificates/68202.pem /etc/ssl/certs/68202.pem"
	I0717 11:10:26.492679    8746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68202.pem
	I0717 11:10:26.493961    8746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:54 /usr/share/ca-certificates/68202.pem
	I0717 11:10:26.493986    8746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68202.pem
	I0717 11:10:26.495754    8746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68202.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 11:10:26.499133    8746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 11:10:26.502253    8746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:10:26.503622    8746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:10:26.503638    8746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:10:26.505297    8746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 11:10:26.508057    8746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6820.pem && ln -fs /usr/share/ca-certificates/6820.pem /etc/ssl/certs/6820.pem"
	I0717 11:10:26.511212    8746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6820.pem
	I0717 11:10:26.512737    8746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:54 /usr/share/ca-certificates/6820.pem
	I0717 11:10:26.512754    8746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6820.pem
	I0717 11:10:26.514473    8746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6820.pem /etc/ssl/certs/51391683.0"
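	The openssl/ln pairs above install each PEM where OpenSSL's default lookup can find it: `openssl x509 -hash` prints the certificate's subject-name hash, and `/etc/ssl/certs/<hash>.0` is the filename OpenSSL probes in its hashed directory. A sketch of that install step (needs root in practice; the helper name is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA links /etc/ssl/certs/<subject-hash>.0 at the given PEM so
    // OpenSSL's hashed-directory lookup can find it, like the
    // `openssl x509 -hash` + `ln -fs` pairs above.
    func installCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // ln -f semantics: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }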
	I0717 11:10:26.517478    8746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 11:10:26.519018    8746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 11:10:26.521294    8746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 11:10:26.523040    8746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 11:10:26.525086    8746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 11:10:26.527109    8746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 11:10:26.529052    8746 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 11:10:26.530795    8746 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-058000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51504 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-058000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:10:26.530862    8746 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 11:10:26.540567    8746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 11:10:26.543641    8746 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 11:10:26.543649    8746 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 11:10:26.543674    8746 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 11:10:26.547163    8746 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:10:26.547492    8746 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-058000" does not appear in /Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:10:26.547581    8746 kubeconfig.go:62] /Users/jenkins/minikube-integration/19282-6331/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-058000" cluster setting kubeconfig missing "stopped-upgrade-058000" context setting]
	I0717 11:10:26.547773    8746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/kubeconfig: {Name:mk593058234481727c8f9c6b6ce8d5b26e4d4302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:26.548209    8746 kapi.go:59] client config for stopped-upgrade-058000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/client.key", CAFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106267730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:10:26.548540    8746 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 11:10:26.551239    8746 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-058000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
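	Drift detection above is just `sudo diff -u old new`: exit status 0 means the configs match, non-zero means reconfigure, and the diff text itself is kept for the log (here the criSocket gained a `unix://` scheme and the cgroup driver moved from systemd to cgroupfs). A sketch of that check; the helper name is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrift runs `diff -u` and maps its exit status onto a boolean:
    // nil error means identical, non-nil means the files differ (or the
    // old file is missing). The unified diff is returned for logging.
    func configDrift(oldPath, newPath string) (bool, string) {
        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
        return err != nil, string(out)
    }

    func main() {
        drift, diff := configDrift("/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        if drift {
            fmt.Println("detected kubeadm config drift:\n" + diff)
        }
    }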
	I0717 11:10:26.551247    8746 kubeadm.go:1160] stopping kube-system containers ...
	I0717 11:10:26.551288    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 11:10:26.562543    8746 docker.go:483] Stopping containers: [e372bb421024 5ac69b9301b1 05d92b386885 45e9faca056f 4d18bd71336b 4229c14fdcfb f73468515120 5778510fae0a 6d85b1985a2d]
	I0717 11:10:26.562612    8746 ssh_runner.go:195] Run: docker stop e372bb421024 5ac69b9301b1 05d92b386885 45e9faca056f 4d18bd71336b 4229c14fdcfb f73468515120 5778510fae0a 6d85b1985a2d
	I0717 11:10:26.572942    8746 ssh_runner.go:195] Run: sudo systemctl stop kubelet
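
[Editor's note] The shutdown step above finds every kube-system container by the k8s_<container>_<pod>_<namespace>_ name pattern that kubelet's Docker integration assigns, stops them in one `docker stop`, then stops kubelet so nothing restarts them mid-reconfigure. A hypothetical sketch of that sequence (illustrative only):

// sketch: stop all kube-system containers, then kubelet
package main

import (
	"os/exec"
	"strings"
)

func main() {
	// Docker's name filter is a regex, matching the kubelet naming scheme.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) > 0 {
		exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
	}
	exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
}
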
	I0717 11:10:26.578405    8746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:10:26.581293    8746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 11:10:26.581299    8746 kubeadm.go:157] found existing configuration files:
	
	I0717 11:10:26.581322    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/admin.conf
	I0717 11:10:26.583779    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 11:10:26.583802    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:10:26.586772    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/kubelet.conf
	I0717 11:10:26.589736    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 11:10:26.589766    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:10:26.592247    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/controller-manager.conf
	I0717 11:10:26.595043    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 11:10:26.595064    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:10:26.598054    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/scheduler.conf
	I0717 11:10:26.600639    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 11:10:26.600662    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
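
[Editor's note] The grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes must reference the expected control-plane endpoint, and any file that does not (or does not exist; grep exits 2 here because the files are absent) is removed so kubeadm can regenerate it. A hypothetical sketch of that loop (illustrative, not minikube's code):

// sketch: remove kubeconfigs that don't reference the expected endpoint
package main

import "os/exec"

func cleanupStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			exec.Command("sudo", "rm", "-f", f).Run() // best effort, as in the log
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:51504")
}
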
	I0717 11:10:26.603249    8746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:10:26.606414    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:10:26.628676    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:10:26.966256    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:10:27.104522    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:10:27.127105    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
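
[Editor's note] Rather than a full `kubeadm init`, the five commands above rebuild the control plane phase by phase against the repaired kubeadm.yaml, with PATH pointing at the pinned v1.24.1 binaries. A hypothetical sketch of that sequence (illustrative only):

// sketch: run the kubeadm init phases in the order shown in the log
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			strings.Join(p, " "),
		)
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			panic(err) // any failed phase aborts the restore
		}
	}
}
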
	I0717 11:10:27.150870    8746 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:10:27.150940    8746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:10:27.653205    8746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:10:28.152910    8746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:10:28.157168    8746 api_server.go:72] duration metric: took 1.006297875s to wait for apiserver process to appear ...
	I0717 11:10:28.157179    8746 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:10:28.157188    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:33.158512    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:33.158533    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:38.158797    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:38.158827    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:43.159294    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:43.159334    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:48.159973    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:48.160047    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:53.160520    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:53.160557    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:58.161216    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:58.161237    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:03.161962    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:03.162009    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:08.163017    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:08.163060    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:13.164256    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:13.164293    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:18.164885    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:18.164933    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:23.166735    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:23.166773    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:28.167249    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
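
[Editor's note] In the wait loop above, each "stopped" line arrives almost exactly 5 seconds after its "Checking" line, which matches a per-request client timeout rather than a slow-but-healthy response: the apiserver never answers at all, so the harness falls back to collecting logs below. A hypothetical sketch of such a probe loop, assuming a 5-second client timeout and skipping verification of the cluster's self-signed certificate (illustrative only):

// sketch: poll the apiserver /healthz endpoint with a short per-request timeout
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy; falling back to log collection")
}
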
	I0717 11:11:28.167476    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:28.187489    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:11:28.187585    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:28.202495    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:11:28.202578    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:28.214231    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:11:28.214335    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:28.226654    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:11:28.226725    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:28.240308    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:11:28.240373    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:28.250694    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:11:28.250772    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:28.260790    8746 logs.go:276] 0 containers: []
	W0717 11:11:28.260803    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:28.260864    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:28.271383    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:11:28.271401    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:11:28.271406    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:28.283782    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:28.283796    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:28.288529    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:11:28.288537    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:11:28.300306    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:11:28.300318    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:11:28.319849    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:28.319859    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:28.345791    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:11:28.345803    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:11:28.361037    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:11:28.361050    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:11:28.377889    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:28.377899    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:28.417218    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:11:28.417232    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:11:28.431608    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:11:28.431621    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:11:28.457402    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:11:28.457416    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:11:28.472287    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:11:28.472299    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:11:28.490165    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:11:28.490175    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:11:28.501274    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:28.501285    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:28.609923    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:11:28.609938    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:11:28.624341    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:11:28.624352    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
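
[Editor's note] One log-gathering round, as above, repeats after every failed healthz window: containers are located by the k8s_<component> name prefix, then the last 400 lines of each are tailed. A hypothetical sketch of that round (illustrative; component list abbreviated):

// sketch: list containers per component and tail their logs
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		for _, id := range containerIDs(c) {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s]: %d bytes of logs\n", c, id, len(logs))
		}
	}
}
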
	I0717 11:11:31.142711    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:36.144971    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:36.145127    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:36.157184    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:11:36.157263    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:36.168390    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:11:36.168469    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:36.179647    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:11:36.179716    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:36.194113    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:11:36.194180    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:36.206084    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:11:36.206155    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:36.216950    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:11:36.217017    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:36.227603    8746 logs.go:276] 0 containers: []
	W0717 11:11:36.227614    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:36.227672    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:36.238154    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:11:36.238171    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:11:36.238177    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:11:36.252983    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:11:36.252994    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:11:36.282175    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:11:36.282186    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:11:36.297206    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:11:36.297217    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:11:36.309184    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:11:36.309194    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:11:36.320654    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:36.320665    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:36.346335    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:36.346342    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:36.385179    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:11:36.385186    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:11:36.403661    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:36.403672    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:36.408437    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:11:36.408445    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:11:36.422247    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:11:36.422257    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:11:36.433437    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:11:36.433447    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:11:36.456068    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:11:36.456078    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:11:36.474150    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:36.474162    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:36.513300    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:11:36.513311    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:11:36.525126    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:11:36.525135    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:39.038773    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:44.041188    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:44.041467    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:44.071421    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:11:44.071564    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:44.091456    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:11:44.091537    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:44.104709    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:11:44.104778    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:44.115540    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:11:44.115610    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:44.126175    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:11:44.126245    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:44.140523    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:11:44.140598    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:44.154533    8746 logs.go:276] 0 containers: []
	W0717 11:11:44.154545    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:44.154607    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:44.164977    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:11:44.164995    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:11:44.165001    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:11:44.179343    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:11:44.179358    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:11:44.197705    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:11:44.197719    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:11:44.209512    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:11:44.209522    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:11:44.223901    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:11:44.223915    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:11:44.235318    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:11:44.235332    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:44.247236    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:11:44.247247    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:11:44.262076    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:11:44.262093    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:11:44.278986    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:11:44.278998    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:11:44.303269    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:11:44.303286    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:11:44.316854    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:44.316866    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:44.354195    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:44.354209    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:44.358357    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:44.358363    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:44.395303    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:11:44.395313    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:11:44.420824    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:11:44.420835    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:11:44.432506    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:44.432517    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:46.959627    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:51.962065    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:51.962232    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:51.978842    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:11:51.978934    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:51.993156    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:11:51.993229    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:52.004292    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:11:52.004358    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:52.017347    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:11:52.017418    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:52.027977    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:11:52.028045    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:52.040723    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:11:52.040800    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:52.051341    8746 logs.go:276] 0 containers: []
	W0717 11:11:52.051353    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:52.051418    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:52.062484    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:11:52.062500    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:11:52.062507    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:11:52.080179    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:11:52.080194    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:11:52.092165    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:52.092178    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:52.131210    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:11:52.131220    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:11:52.145283    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:11:52.145295    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:11:52.159510    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:11:52.159520    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:11:52.185883    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:11:52.185896    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:11:52.203434    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:11:52.203444    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:11:52.216167    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:52.216179    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:52.241903    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:11:52.241914    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:52.254172    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:52.254189    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:52.258576    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:52.258582    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:52.294538    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:11:52.294551    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:11:52.305763    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:11:52.305774    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:11:52.324024    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:11:52.324036    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:11:52.338732    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:11:52.338746    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:11:54.853336    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:59.856055    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:59.856242    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:59.878094    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:11:59.878202    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:59.893430    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:11:59.893503    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:59.905735    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:11:59.905808    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:59.915955    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:11:59.916028    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:59.926590    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:11:59.926661    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:59.940190    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:11:59.940259    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:59.955625    8746 logs.go:276] 0 containers: []
	W0717 11:11:59.955636    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:59.955692    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:59.966514    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:11:59.966530    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:59.966537    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:59.971211    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:59.971217    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:00.006955    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:00.006967    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:00.024780    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:00.024792    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:00.036448    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:00.036460    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:00.075540    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:00.075551    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:00.093717    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:00.093728    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:00.107173    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:00.107185    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:00.121814    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:00.121824    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:00.146366    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:00.146378    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:00.158392    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:00.158406    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:00.170282    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:00.170293    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:00.195786    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:00.195795    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:00.210001    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:00.210014    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:00.224308    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:00.224322    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:00.236063    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:00.236074    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:02.750465    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:07.753048    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:07.753283    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:07.783817    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:07.783942    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:07.801888    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:07.801985    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:07.815998    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:07.816075    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:07.827123    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:07.827195    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:07.837483    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:07.837552    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:07.847900    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:07.847966    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:07.859228    8746 logs.go:276] 0 containers: []
	W0717 11:12:07.859240    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:07.859303    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:07.876530    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:07.876547    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:07.876552    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:07.901748    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:07.901759    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:07.940449    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:07.940458    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:07.952529    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:07.952540    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:07.964252    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:07.964262    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:07.975712    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:07.975722    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:07.979772    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:07.979779    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:08.017349    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:08.017362    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:08.029413    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:08.029424    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:08.040718    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:08.040730    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:08.057798    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:08.057810    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:08.071822    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:08.071834    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:08.096906    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:08.096916    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:08.111663    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:08.111675    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:08.126187    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:08.126198    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:08.147443    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:08.147453    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:10.660134    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:15.660753    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:15.660956    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:15.684832    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:15.684934    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:15.700140    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:15.700238    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:15.718464    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:15.718531    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:15.728956    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:15.729023    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:15.739419    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:15.739490    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:15.758801    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:15.758868    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:15.768944    8746 logs.go:276] 0 containers: []
	W0717 11:12:15.768956    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:15.769016    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:15.779497    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:15.779515    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:15.779520    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:15.790758    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:15.790769    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:15.806409    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:15.806419    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:15.818274    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:15.818285    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:15.835732    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:15.835742    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:15.861043    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:15.861052    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:15.872956    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:15.872967    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:15.897490    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:15.897504    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:15.934133    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:15.934145    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:15.946863    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:15.946874    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:15.951262    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:15.951268    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:15.965343    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:15.965353    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:15.979228    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:15.979242    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:15.997781    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:15.997795    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:16.010132    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:16.010145    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:16.028098    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:16.028109    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:18.568126    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:23.570741    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:23.570859    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:23.583042    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:23.583106    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:23.594686    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:23.594772    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:23.605993    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:23.606061    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:23.617078    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:23.617154    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:23.628039    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:23.628104    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:23.638767    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:23.638829    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:23.648811    8746 logs.go:276] 0 containers: []
	W0717 11:12:23.648822    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:23.648880    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:23.659312    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:23.659333    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:23.659340    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:23.663691    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:23.663698    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:23.678641    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:23.678650    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:23.693657    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:23.693670    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:23.718081    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:23.718091    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:23.732214    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:23.732229    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:23.743254    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:23.743266    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:23.761328    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:23.761341    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:23.773317    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:23.773332    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:23.785708    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:23.785718    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:23.824611    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:23.824623    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:23.847618    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:23.847630    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:23.860152    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:23.860162    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:23.898836    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:23.898844    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:23.923162    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:23.923174    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:23.940634    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:23.940646    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:26.454695    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:31.457013    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:31.457224    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:31.477361    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:31.477450    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:31.492926    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:31.493003    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:31.504845    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:31.504923    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:31.515770    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:31.515840    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:31.526349    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:31.526415    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:31.536786    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:31.536858    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:31.546524    8746 logs.go:276] 0 containers: []
	W0717 11:12:31.546534    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:31.546586    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:31.563524    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:31.563541    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:31.563549    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:31.589208    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:31.589219    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:31.600699    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:31.600711    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:31.619729    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:31.619739    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:31.631388    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:31.631399    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:31.635471    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:31.635478    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:31.649622    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:31.649679    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:31.667210    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:31.667226    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:31.680681    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:31.680690    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:31.692439    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:31.692449    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:31.728073    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:31.728088    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:31.742179    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:31.742190    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:31.757374    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:31.757386    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:31.774951    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:31.774965    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:31.799758    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:31.799768    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:31.837745    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:31.837753    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:34.354363    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:39.356757    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:39.357307    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:39.386577    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:39.386718    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:39.411821    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:39.411903    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:39.424762    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:39.424847    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:39.435979    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:39.436054    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:39.447999    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:39.448073    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:39.458938    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:39.459001    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:39.468860    8746 logs.go:276] 0 containers: []
	W0717 11:12:39.468873    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:39.468936    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:39.479873    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
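Each diagnostic pass starts by locating the control-plane containers, as in the block above: kubelet names Docker containers with a k8s_<component> prefix, so "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" returns one ID per matching container; two IDs for a component (kube-apiserver, etcd, kube-scheduler, and kube-controller-manager here) usually indicate that an exited container from before a restart is still present alongside the current one. A small sketch of the same discovery step (containerIDs is a hypothetical helper, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same docker ps filter seen in the log and
    // returns the matching container IDs, one per line of output.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // mirrors the "logs.go:276] N containers: [...]" lines above
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }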
	I0717 11:12:39.479890    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:39.479896    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:39.519446    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:39.519458    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:39.527106    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:39.527117    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:39.561835    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:39.561847    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:39.576086    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:39.576096    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:39.595125    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:39.595138    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:39.618213    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:39.618221    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:39.633316    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:39.633326    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:39.645223    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:39.645233    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:39.667068    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:39.667081    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:39.679983    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:39.679994    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:39.691693    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:39.691704    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:39.706348    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:39.706358    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:39.717628    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:39.717638    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:39.729569    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:39.729581    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:39.754292    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:39.754305    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
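A full gathering pass, as above, fans out over two kinds of sources: "docker logs --tail 400" for every container found in the discovery step, plus a fixed set of host-level commands (journalctl for the kubelet and docker/cri-docker units, a dmesg filtered to warnings and above, kubectl describe nodes against the guest kubeconfig, and a crictl-or-docker ps fallback for container status). Note that the order of the "Gathering logs for ..." lines changes from pass to pass, consistent with iteration over an unordered collection. A sketch of the fan-out under those assumptions (gather is a hypothetical name; output handling is elided):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs the host-level commands seen in the log, then tails each
    // discovered container's logs. Map iteration order is unspecified in
    // Go, which would explain the varying order between passes.
    func gather(containers map[string][]string) {
        hostCmds := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
            "describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for name, cmd := range hostCmds {
            fmt.Println("Gathering logs for", name, "...")
            exec.Command("/bin/bash", "-c", cmd).Run()
        }
        for component, ids := range containers {
            for _, id := range ids {
                fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
                exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id).Run()
            }
        }
    }

    func main() {
        gather(map[string][]string{
            "kube-apiserver": {"28f9b708ba6d", "e372bb421024"},
            "etcd":           {"a607f8fd4ff0", "45e9faca056f"},
        })
    }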
	I0717 11:12:42.267776    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:47.270216    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:47.270385    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:47.283615    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:47.283696    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:47.294947    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:47.295015    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:47.305782    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:47.305864    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:47.316770    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:47.316846    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:47.326842    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:47.326905    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:47.337822    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:47.337886    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:47.348125    8746 logs.go:276] 0 containers: []
	W0717 11:12:47.348140    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:47.348195    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:47.358889    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:47.358906    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:47.358912    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:47.371588    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:47.371605    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:47.382475    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:47.382487    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:47.407883    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:47.407895    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:47.443223    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:47.443237    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:47.455493    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:47.455504    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:47.472490    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:47.472500    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:47.484534    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:47.484547    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:47.499070    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:47.499084    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:47.513909    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:47.513920    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:47.531862    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:47.531874    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:47.545270    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:47.545284    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:47.587312    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:47.587328    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:47.612587    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:47.612601    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:47.624232    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:47.624244    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:47.628588    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:47.628595    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:50.148704    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:55.150960    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:55.151193    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:55.180439    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:12:55.180531    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:55.200209    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:12:55.200290    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:55.212298    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:12:55.212373    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:55.222960    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:12:55.223035    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:55.233499    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:12:55.233574    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:55.251201    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:12:55.251271    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:55.261996    8746 logs.go:276] 0 containers: []
	W0717 11:12:55.262008    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:55.262065    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:55.275729    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:12:55.275746    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:55.275751    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:55.313196    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:12:55.313209    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:12:55.324499    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:55.324510    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:55.358603    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:12:55.358615    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:12:55.372695    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:12:55.372706    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:12:55.387476    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:12:55.387486    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:12:55.400811    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:12:55.400825    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:12:55.419990    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:12:55.420003    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:12:55.432581    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:55.432594    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:55.437358    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:12:55.437365    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:12:55.461907    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:12:55.461918    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:12:55.475491    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:12:55.475502    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:12:55.491112    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:12:55.491122    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:12:55.509901    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:12:55.509912    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:12:55.521363    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:55.521373    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:55.544625    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:12:55.544632    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:58.056920    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:03.058847    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:03.059053    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:03.084884    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:03.084998    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:03.109373    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:03.109444    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:03.120785    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:03.120858    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:03.131465    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:03.131560    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:03.142215    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:03.142297    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:03.153036    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:03.153102    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:03.163626    8746 logs.go:276] 0 containers: []
	W0717 11:13:03.163636    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:03.163697    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:03.174377    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:03.174394    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:03.174399    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:03.189064    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:03.189074    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:03.208201    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:03.208216    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:03.219861    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:03.219871    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:03.244364    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:03.244376    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:03.279120    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:03.279132    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:03.293353    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:03.293363    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:03.318224    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:03.318235    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:03.331201    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:03.331212    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:03.345494    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:03.345504    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:03.357596    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:03.357608    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:03.395491    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:03.395505    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:03.399535    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:03.399545    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:03.410588    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:03.410600    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:03.422487    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:03.422498    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:03.440054    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:03.440067    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:05.959149    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:10.959625    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:10.959824    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:10.977446    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:10.977540    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:10.990157    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:10.990235    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:11.005411    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:11.005476    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:11.016223    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:11.016301    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:11.027539    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:11.027601    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:11.039255    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:11.039318    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:11.049171    8746 logs.go:276] 0 containers: []
	W0717 11:13:11.049183    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:11.049246    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:11.060985    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:11.061004    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:11.061010    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:11.078687    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:11.078698    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:11.091265    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:11.091275    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:11.116078    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:11.116088    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:11.151424    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:11.151434    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:11.165512    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:11.165523    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:11.181074    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:11.181084    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:11.192331    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:11.192342    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:11.204696    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:11.204712    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:11.244651    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:11.244671    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:11.249537    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:11.249546    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:11.264367    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:11.264380    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:11.289167    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:11.289178    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:11.301641    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:11.301651    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:11.320146    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:11.320157    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:11.332162    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:11.332174    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:13.846242    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:18.848958    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:18.849410    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:18.889346    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:18.889483    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:18.911798    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:18.911911    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:18.927062    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:18.927133    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:18.940092    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:18.940171    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:18.951219    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:18.951296    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:18.962082    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:18.962148    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:18.972535    8746 logs.go:276] 0 containers: []
	W0717 11:13:18.972546    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:18.972605    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:18.984012    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:18.984028    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:18.984035    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:18.995622    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:18.995635    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:19.007423    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:19.007437    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:19.024689    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:19.024701    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:19.048514    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:19.048526    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:19.087176    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:19.087196    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:19.103616    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:19.103627    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:19.119541    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:19.119552    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:19.132513    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:19.132525    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:19.156164    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:19.156174    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:19.160359    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:19.160366    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:19.190028    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:19.190039    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:19.207113    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:19.207124    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:19.221400    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:19.221411    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:19.256758    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:19.256770    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:19.275070    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:19.275081    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:21.788989    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:26.791492    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:26.791806    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:26.832441    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:26.832570    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:26.852376    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:26.852474    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:26.866888    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:26.866973    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:26.881277    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:26.881351    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:26.892557    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:26.892624    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:26.905931    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:26.906001    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:26.917069    8746 logs.go:276] 0 containers: []
	W0717 11:13:26.917078    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:26.917130    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:26.928710    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:26.928739    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:26.928744    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:26.965280    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:26.965292    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:26.983789    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:26.983801    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:26.996817    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:26.996830    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:27.012913    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:27.012932    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:27.017902    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:27.017909    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:27.061044    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:27.061055    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:27.075563    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:27.075573    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:27.086818    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:27.086832    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:27.099479    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:27.099493    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:27.110797    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:27.110807    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:27.133348    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:27.133358    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:27.145832    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:27.145843    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:27.172076    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:27.172086    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:27.186592    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:27.186603    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:27.205058    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:27.205069    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:29.731664    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:34.734061    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:34.734323    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:34.758854    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:34.758959    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:34.774993    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:34.775071    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:34.788045    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:34.788106    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:34.798986    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:34.799045    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:34.809568    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:34.809635    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:34.820031    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:34.820103    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:34.830109    8746 logs.go:276] 0 containers: []
	W0717 11:13:34.830121    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:34.830183    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:34.840903    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:34.840922    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:34.840928    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:34.880565    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:34.880576    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:34.897535    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:34.897548    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:34.908698    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:34.908710    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:34.927155    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:34.927165    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:34.944090    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:34.944101    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:34.961619    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:34.961629    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:34.986447    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:34.986461    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:34.998065    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:34.998076    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:35.036325    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:35.036337    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:35.048274    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:35.048285    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:35.060030    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:35.060042    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:35.084425    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:35.084435    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:35.096972    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:35.096984    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:35.101662    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:35.101672    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:35.117345    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:35.117355    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:37.634548    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:42.635815    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:42.635992    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:42.654361    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:42.654449    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:42.668106    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:42.668183    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:42.679539    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:42.679609    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:42.690233    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:42.690306    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:42.701092    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:42.701164    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:42.711524    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:42.711596    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:42.721632    8746 logs.go:276] 0 containers: []
	W0717 11:13:42.721646    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:42.721707    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:42.731841    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:42.731858    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:42.731865    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:42.746219    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:42.746236    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:42.757883    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:42.757897    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:42.784108    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:42.784120    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:42.796321    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:42.796335    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:42.820396    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:42.820404    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:42.832286    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:42.832296    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:42.844306    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:42.844316    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:42.862698    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:42.862709    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:42.880361    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:42.880371    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:42.917204    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:42.917212    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:42.921340    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:42.921347    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:42.957665    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:42.957675    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:42.978694    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:42.978703    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:42.993358    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:42.993369    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:43.009179    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:43.009189    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:45.523143    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:50.524769    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:50.524940    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:50.538392    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:50.538469    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:50.553515    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:50.553585    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:50.565777    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:50.565844    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:50.576210    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:50.576282    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:50.586644    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:50.586711    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:50.597191    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:50.597256    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:50.608654    8746 logs.go:276] 0 containers: []
	W0717 11:13:50.608667    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:50.608728    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:50.620379    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:50.620396    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:50.620402    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:50.657681    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:50.657690    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:50.676592    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:50.676605    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:50.693422    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:50.693433    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:50.717417    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:50.717425    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:50.729132    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:50.729143    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:50.733638    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:50.733645    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:50.768680    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:50.768691    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:50.780571    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:50.780581    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:50.791944    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:50.791956    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:50.802862    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:50.802875    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:50.824950    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:50.824961    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:50.846494    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:50.846506    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:13:50.861212    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:50.861225    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:50.873136    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:50.873148    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:50.897532    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:50.897543    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:53.412064    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:58.414499    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:58.414698    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:58.432872    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:13:58.432968    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:58.450593    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:13:58.450661    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:58.462227    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:13:58.462304    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:58.472877    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:13:58.472946    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:58.483452    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:13:58.483520    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:58.493929    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:13:58.493997    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:58.504636    8746 logs.go:276] 0 containers: []
	W0717 11:13:58.504654    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:58.504715    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:58.518333    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:13:58.518347    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:13:58.518352    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:13:58.530850    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:58.530859    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:58.569093    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:13:58.569101    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:13:58.587689    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:13:58.587700    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:13:58.599655    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:58.599666    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:58.636369    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:13:58.636380    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:58.649526    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:13:58.649540    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:13:58.661701    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:58.661712    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:58.685740    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:58.685748    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:58.689736    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:13:58.689745    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:13:58.714495    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:13:58.714506    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:13:58.727333    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:13:58.727345    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:13:58.745528    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:13:58.745539    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:13:58.757010    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:13:58.757020    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:13:58.773162    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:13:58.773173    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:13:58.786849    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:13:58.786857    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:14:01.303118    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:06.305617    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:06.305778    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:06.320533    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:14:06.320610    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:06.332359    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:14:06.332428    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:06.343690    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:14:06.343758    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:06.354050    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:14:06.354127    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:06.364776    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:14:06.364849    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:06.375425    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:14:06.375486    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:06.385401    8746 logs.go:276] 0 containers: []
	W0717 11:14:06.385416    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:06.385470    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:06.396070    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:14:06.396092    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:14:06.396098    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:14:06.409834    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:14:06.409845    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:14:06.427399    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:14:06.427409    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:14:06.439924    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:06.439935    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:06.475235    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:14:06.475248    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:14:06.489839    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:14:06.489852    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:14:06.508133    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:14:06.508144    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:14:06.519873    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:14:06.519887    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:14:06.531552    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:14:06.531571    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:06.546705    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:14:06.546718    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:14:06.571799    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:14:06.571810    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:14:06.582793    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:06.582804    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:06.604494    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:06.604504    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:06.642144    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:06.642152    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:06.646116    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:14:06.646123    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:14:06.660686    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:14:06.660696    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:14:09.174426    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:14.174879    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
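
Each "Checking apiserver healthz" / "stopped:" pair above is one probe: an HTTPS GET against /healthz with a short client timeout, retried until an overall deadline, with a log-gathering pass between failures. A rough equivalent of that probe is sketched below; it is illustrative only, and where this sketch skips TLS verification, the real check in api_server.go trusts minikube's generated CA instead.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeHealthz GETs the apiserver's /healthz endpoint and reports whether
    // it answered "ok" before the client timeout expired.
    func probeHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gap between the log lines above
    		Transport: &http.Transport{
    			// Illustration only: minikube verifies against its own CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. context deadline exceeded while awaiting headers
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz not ready: %d %q", resp.StatusCode, body)
    	}
    	return nil
    }

    func main() {
    	for {
    		if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    			fmt.Println("stopped:", err)
    			time.Sleep(2 * time.Second) // back off, gather logs, retry
    			continue
    		}
    		fmt.Println("apiserver healthy")
    		return
    	}
    }

In this run the probe never succeeds: every attempt times out awaiting response headers, so the loop alternates between probes and log gathering until the 4-minute restart budget is exhausted below.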
	I0717 11:14:14.175088    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:14.192914    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:14:14.192997    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:14.213874    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:14:14.213945    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:14.224177    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:14:14.224248    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:14.235973    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:14:14.236045    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:14.246702    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:14:14.246769    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:14.257315    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:14:14.257390    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:14.267891    8746 logs.go:276] 0 containers: []
	W0717 11:14:14.267904    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:14.267962    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:14.278883    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:14:14.278902    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:14.278908    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:14.318270    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:14.318280    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:14.356414    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:14:14.356426    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:14:14.381552    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:14:14.381563    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:14:14.405156    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:14:14.405166    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:14:14.423309    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:14:14.423319    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:14:14.452466    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:14.452486    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:14.459527    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:14:14.459541    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:14:14.474886    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:14:14.474903    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:14:14.493559    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:14:14.493569    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:14:14.505290    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:14:14.505303    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:14:14.516821    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:14:14.516833    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:14:14.529700    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:14.529715    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:14.552686    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:14:14.552694    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:14:14.566739    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:14:14.566751    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:14:14.578795    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:14:14.578807    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:17.093366    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:22.095743    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:22.096035    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:22.122641    8746 logs.go:276] 2 containers: [28f9b708ba6d e372bb421024]
	I0717 11:14:22.122792    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:22.139555    8746 logs.go:276] 2 containers: [a607f8fd4ff0 45e9faca056f]
	I0717 11:14:22.139629    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:22.153269    8746 logs.go:276] 1 containers: [b2787a80d172]
	I0717 11:14:22.153342    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:22.164972    8746 logs.go:276] 2 containers: [e550f8f893fe 05d92b386885]
	I0717 11:14:22.165045    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:22.175378    8746 logs.go:276] 1 containers: [7a1dec545306]
	I0717 11:14:22.175447    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:22.186110    8746 logs.go:276] 2 containers: [5089e69ed752 4d18bd71336b]
	I0717 11:14:22.186180    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:22.196547    8746 logs.go:276] 0 containers: []
	W0717 11:14:22.196675    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:22.196734    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:22.215865    8746 logs.go:276] 1 containers: [ce5f7f7bbe48]
	I0717 11:14:22.215882    8746 logs.go:123] Gathering logs for kube-scheduler [05d92b386885] ...
	I0717 11:14:22.215887    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d92b386885"
	I0717 11:14:22.235300    8746 logs.go:123] Gathering logs for kube-proxy [7a1dec545306] ...
	I0717 11:14:22.235316    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a1dec545306"
	I0717 11:14:22.256695    8746 logs.go:123] Gathering logs for storage-provisioner [ce5f7f7bbe48] ...
	I0717 11:14:22.256705    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce5f7f7bbe48"
	I0717 11:14:22.272056    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:22.272068    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:22.295851    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:14:22.295860    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:22.309917    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:22.309931    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:22.314003    8746 logs.go:123] Gathering logs for kube-apiserver [28f9b708ba6d] ...
	I0717 11:14:22.314009    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28f9b708ba6d"
	I0717 11:14:22.327892    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:22.327907    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:22.363227    8746 logs.go:123] Gathering logs for etcd [a607f8fd4ff0] ...
	I0717 11:14:22.363242    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a607f8fd4ff0"
	I0717 11:14:22.377435    8746 logs.go:123] Gathering logs for kube-scheduler [e550f8f893fe] ...
	I0717 11:14:22.377445    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e550f8f893fe"
	I0717 11:14:22.390704    8746 logs.go:123] Gathering logs for kube-controller-manager [5089e69ed752] ...
	I0717 11:14:22.390718    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5089e69ed752"
	I0717 11:14:22.408916    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:22.408929    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:22.448991    8746 logs.go:123] Gathering logs for kube-apiserver [e372bb421024] ...
	I0717 11:14:22.449003    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e372bb421024"
	I0717 11:14:22.474562    8746 logs.go:123] Gathering logs for kube-controller-manager [4d18bd71336b] ...
	I0717 11:14:22.474577    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d18bd71336b"
	I0717 11:14:22.487337    8746 logs.go:123] Gathering logs for etcd [45e9faca056f] ...
	I0717 11:14:22.487347    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45e9faca056f"
	I0717 11:14:22.501870    8746 logs.go:123] Gathering logs for coredns [b2787a80d172] ...
	I0717 11:14:22.501880    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2787a80d172"
	I0717 11:14:25.015545    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:30.017411    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:30.017548    8746 kubeadm.go:597] duration metric: took 4m3.473516958s to restartPrimaryControlPlane
	W0717 11:14:30.017674    8746 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 11:14:30.017738    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0717 11:14:31.078532    8746 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.06077625s)
	I0717 11:14:31.078600    8746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 11:14:31.083445    8746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:14:31.086015    8746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:14:31.088674    8746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 11:14:31.088681    8746 kubeadm.go:157] found existing configuration files:
	
	I0717 11:14:31.088707    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/admin.conf
	I0717 11:14:31.091370    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 11:14:31.091395    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:14:31.093837    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/kubelet.conf
	I0717 11:14:31.096398    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 11:14:31.096418    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:14:31.099400    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/controller-manager.conf
	I0717 11:14:31.101927    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 11:14:31.101948    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:14:31.104553    8746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/scheduler.conf
	I0717 11:14:31.107644    8746 kubeadm.go:163] "https://control-plane.minikube.internal:51504" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51504 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 11:14:31.107665    8746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
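
The grep-then-rm sequence above is minikube's stale-config sweep: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint, and removed otherwise so that kubeadm init can rewrite it. A condensed local-filesystem sketch of that logic follows (minikube runs the equivalent shell commands over SSH; here the grep/rm pair becomes a read-and-check).

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // sweepStaleConfigs removes kubeconfig files that do not reference the
    // expected apiserver endpoint, mirroring:
    //   sudo grep <endpoint> <file>  ||  sudo rm -f <file>
    func sweepStaleConfigs(endpoint string, files []string) {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err == nil && bytes.Contains(data, []byte(endpoint)) {
    			continue // config already points at the right endpoint; keep it
    		}
    		fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    		os.Remove(f) // ignore errors: after kubeadm reset the file may not exist
    	}
    }

    func main() {
    	sweepStaleConfigs("https://control-plane.minikube.internal:51504", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }

Here all four greps exit with status 2 because kubeadm reset already deleted the files, so the subsequent rm -f calls are harmless no-ops.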
	I0717 11:14:31.110617    8746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 11:14:31.128645    8746 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0717 11:14:31.128672    8746 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 11:14:31.178772    8746 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 11:14:31.178825    8746 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection

	I0717 11:14:31.178901    8746 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 11:14:31.231927    8746 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 11:14:31.236096    8746 out.go:204]   - Generating certificates and keys ...
	I0717 11:14:31.236132    8746 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 11:14:31.236177    8746 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 11:14:31.236407    8746 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 11:14:31.236533    8746 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 11:14:31.236605    8746 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 11:14:31.236669    8746 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 11:14:31.236722    8746 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 11:14:31.236761    8746 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 11:14:31.236826    8746 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 11:14:31.236907    8746 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 11:14:31.236941    8746 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 11:14:31.236989    8746 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 11:14:31.263532    8746 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 11:14:31.325135    8746 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 11:14:31.453902    8746 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 11:14:31.548883    8746 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 11:14:31.587813    8746 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 11:14:31.588133    8746 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 11:14:31.588186    8746 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 11:14:31.674021    8746 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 11:14:31.681932    8746 out.go:204]   - Booting up control plane ...
	I0717 11:14:31.682044    8746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 11:14:31.682105    8746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 11:14:31.682152    8746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 11:14:31.682207    8746 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 11:14:31.682305    8746 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 11:14:36.185013    8746 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504856 seconds
	I0717 11:14:36.185158    8746 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 11:14:36.190900    8746 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 11:14:36.704398    8746 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 11:14:36.704525    8746 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-058000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 11:14:37.209766    8746 kubeadm.go:310] [bootstrap-token] Using token: sfup86.4bhq6tagj8ecwh82
	I0717 11:14:37.214652    8746 out.go:204]   - Configuring RBAC rules ...
	I0717 11:14:37.214725    8746 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 11:14:37.214784    8746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 11:14:37.216564    8746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 11:14:37.221234    8746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 11:14:37.222350    8746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 11:14:37.223519    8746 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 11:14:37.227020    8746 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 11:14:37.388927    8746 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 11:14:37.615566    8746 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 11:14:37.616280    8746 kubeadm.go:310] 
	I0717 11:14:37.616314    8746 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 11:14:37.616318    8746 kubeadm.go:310] 
	I0717 11:14:37.616356    8746 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 11:14:37.616360    8746 kubeadm.go:310] 
	I0717 11:14:37.616373    8746 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 11:14:37.616407    8746 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 11:14:37.616453    8746 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 11:14:37.616459    8746 kubeadm.go:310] 
	I0717 11:14:37.616499    8746 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 11:14:37.616502    8746 kubeadm.go:310] 
	I0717 11:14:37.616562    8746 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 11:14:37.616566    8746 kubeadm.go:310] 
	I0717 11:14:37.616630    8746 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 11:14:37.616686    8746 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 11:14:37.616728    8746 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 11:14:37.616732    8746 kubeadm.go:310] 
	I0717 11:14:37.616785    8746 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 11:14:37.616857    8746 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 11:14:37.616863    8746 kubeadm.go:310] 
	I0717 11:14:37.616946    8746 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sfup86.4bhq6tagj8ecwh82 \
	I0717 11:14:37.617014    8746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c24be85cc8a3b21770f1d422f860354652361b15e4e8167266dbe73d5c2037be \
	I0717 11:14:37.617027    8746 kubeadm.go:310] 	--control-plane 
	I0717 11:14:37.617030    8746 kubeadm.go:310] 
	I0717 11:14:37.617073    8746 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 11:14:37.617076    8746 kubeadm.go:310] 
	I0717 11:14:37.617113    8746 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sfup86.4bhq6tagj8ecwh82 \
	I0717 11:14:37.617238    8746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c24be85cc8a3b21770f1d422f860354652361b15e4e8167266dbe73d5c2037be 
	I0717 11:14:37.617292    8746 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 11:14:37.617300    8746 cni.go:84] Creating CNI manager for ""
	I0717 11:14:37.617310    8746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:14:37.625419    8746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 11:14:37.629473    8746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 11:14:37.632878    8746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
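
The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration chosen two lines earlier. Its exact contents are not in the log; the sketch below writes a minimal bridge conflist of the same general shape, with the subnet and field values being assumptions rather than the actual file minikube shipped.

    package main

    import "os"

    // A minimal bridge CNI config of the general shape minikube writes.
    // Subnet and plugin fields here are illustrative, not the real 496-byte file.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil { // sudo mkdir -p /etc/cni/net.d
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }

The "1-" filename prefix matters because the kubelet picks CNI configs in lexical order, so this file wins over any leftover configs in the same directory.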
	I0717 11:14:37.638144    8746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 11:14:37.638194    8746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 11:14:37.638207    8746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-058000 minikube.k8s.io/updated_at=2024_07_17T11_14_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=stopped-upgrade-058000 minikube.k8s.io/primary=true
	I0717 11:14:37.641387    8746 ops.go:34] apiserver oom_adj: -16
	I0717 11:14:37.683675    8746 kubeadm.go:1113] duration metric: took 45.52175ms to wait for elevateKubeSystemPrivileges
	I0717 11:14:37.683752    8746 kubeadm.go:394] duration metric: took 4m11.152575458s to StartCluster
	I0717 11:14:37.683766    8746 settings.go:142] acquiring lock: {Name:mkb2460e5e181fb6243e4d9c07c303cabf02ebce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:14:37.683855    8746 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:14:37.684262    8746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/kubeconfig: {Name:mk593058234481727c8f9c6b6ce8d5b26e4d4302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
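
The lock.go:35 line shows that kubeconfig updates are serialized through a file lock acquired with a 500ms retry delay and a 1m timeout. Below is a toy version of that acquire-with-retry shape using O_CREATE|O_EXCL on a sentinel file as the lock primitive; this is an assumption about mechanism for illustration, not a copy of minikube's actual lock package.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquire takes an exclusive lock by creating a sentinel file, retrying
    // every delay until timeout — the Delay:500ms Timeout:1m0s shape in the log.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil // release by deleting the sentinel
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("acquiring %s: timed out after %s", path, timeout)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	fmt.Println("lock held; safe to rewrite the kubeconfig")
    }

Serializing the write this way keeps concurrent minikube processes (parallel test runs share one kubeconfig) from interleaving partial writes to the same file.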
	I0717 11:14:37.684479    8746 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:14:37.684565    8746 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:14:37.684541    8746 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 11:14:37.684579    8746 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-058000"
	I0717 11:14:37.684591    8746 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-058000"
	W0717 11:14:37.684594    8746 addons.go:243] addon storage-provisioner should already be in state true
	I0717 11:14:37.684598    8746 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-058000"
	I0717 11:14:37.684605    8746 host.go:66] Checking if "stopped-upgrade-058000" exists ...
	I0717 11:14:37.684610    8746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-058000"
	I0717 11:14:37.685774    8746 kapi.go:59] client config for stopped-upgrade-058000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/stopped-upgrade-058000/client.key", CAFile:"/Users/jenkins/minikube-integration/19282-6331/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106267730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:14:37.685908    8746 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-058000"
	W0717 11:14:37.685913    8746 addons.go:243] addon default-storageclass should already be in state true
	I0717 11:14:37.685920    8746 host.go:66] Checking if "stopped-upgrade-058000" exists ...
	I0717 11:14:37.688531    8746 out.go:177] * Verifying Kubernetes components...
	I0717 11:14:37.688834    8746 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 11:14:37.692606    8746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 11:14:37.692615    8746 sshutil.go:53] new ssh client: &{IP:localhost Port:51470 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/id_rsa Username:docker}
	I0717 11:14:37.696395    8746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:14:37.700408    8746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:14:37.704499    8746 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:14:37.704504    8746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 11:14:37.704510    8746 sshutil.go:53] new ssh client: &{IP:localhost Port:51470 SSHKeyPath:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/stopped-upgrade-058000/id_rsa Username:docker}
	I0717 11:14:37.793394    8746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:14:37.798729    8746 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:14:37.798771    8746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:14:37.802545    8746 api_server.go:72] duration metric: took 118.055375ms to wait for apiserver process to appear ...
	I0717 11:14:37.802552    8746 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:14:37.802558    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:37.827033    8746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 11:14:37.841093    8746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:14:42.804782    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:42.804871    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:47.805733    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:47.805797    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:52.806332    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:52.806363    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:57.807059    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:57.807082    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:15:02.807909    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:15:02.807935    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:15:07.808501    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:15:07.808540    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0717 11:15:08.165063    8746 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0717 11:15:08.168302    8746 out.go:177] * Enabled addons: storage-provisioner
	I0717 11:15:08.180318    8746 addons.go:510] duration metric: took 30.495729417s for enable addons: enabled=[storage-provisioner]
	I0717 11:15:12.809272    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:15:12.809294    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:15:17.810709    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:15:17.810749    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:15:22.812384    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:15:22.812406    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:15:27.814551    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:15:27.814589    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:15:32.816812    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:15:32.816841    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:15:37.819055    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:15:37.819163    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:15:37.830138    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:15:37.830204    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:15:37.840901    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:15:37.840971    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:15:37.851523    8746 logs.go:276] 2 containers: [62d13acf4f90 a8fde36854fe]
	I0717 11:15:37.851581    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:15:37.867760    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:15:37.867829    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:15:37.878488    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:15:37.878556    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:15:37.888778    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:15:37.888846    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:15:37.898884    8746 logs.go:276] 0 containers: []
	W0717 11:15:37.898896    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:15:37.898948    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:15:37.909183    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:15:37.909198    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:15:37.909203    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:15:37.930107    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:15:37.930117    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:15:37.955313    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:15:37.955321    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:15:37.989126    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:15:37.989142    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:15:38.001141    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:15:38.001151    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:15:38.015603    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:15:38.015616    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:15:38.028811    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:15:38.028820    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:15:38.040327    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:15:38.040341    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:15:38.051786    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:15:38.051800    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:15:38.064831    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:15:38.064842    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:15:38.100960    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:15:38.100970    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:15:38.105359    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:15:38.105368    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:15:38.119240    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:15:38.119251    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:15:40.635242    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:15:45.637628    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:15:45.637884    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:15:45.659315    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:15:45.659426    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:15:45.674614    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:15:45.674685    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:15:45.687306    8746 logs.go:276] 2 containers: [62d13acf4f90 a8fde36854fe]
	I0717 11:15:45.687378    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:15:45.698741    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:15:45.698809    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:15:45.709335    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:15:45.709395    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:15:45.724625    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:15:45.724688    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:15:45.734564    8746 logs.go:276] 0 containers: []
	W0717 11:15:45.734581    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:15:45.734638    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:15:45.750429    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:15:45.750445    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:15:45.750451    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:15:45.768041    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:15:45.768055    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:15:45.803861    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:15:45.803872    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:15:45.808092    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:15:45.808098    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:15:45.841666    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:15:45.841681    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:15:45.856216    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:15:45.856229    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:15:45.867605    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:15:45.867620    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:15:45.882691    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:15:45.882705    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:15:45.896306    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:15:45.896319    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:15:45.921801    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:15:45.921816    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:15:45.937880    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:15:45.937893    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:15:45.949476    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:15:45.949489    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:15:45.960717    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:15:45.960731    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:15:48.474811    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:15:53.477145    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:15:53.477447    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:15:53.509870    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:15:53.509952    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:15:53.524777    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:15:53.524839    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:15:53.537022    8746 logs.go:276] 2 containers: [62d13acf4f90 a8fde36854fe]
	I0717 11:15:53.537084    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:15:53.549017    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:15:53.549080    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:15:53.559635    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:15:53.559695    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:15:53.570549    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:15:53.570620    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:15:53.580502    8746 logs.go:276] 0 containers: []
	W0717 11:15:53.580518    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:15:53.580567    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:15:53.590934    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:15:53.590948    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:15:53.590953    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:15:53.602422    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:15:53.602439    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:15:53.620854    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:15:53.620865    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:15:53.632646    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:15:53.632660    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:15:53.650437    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:15:53.650451    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:15:53.655405    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:15:53.655412    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:15:53.690339    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:15:53.690352    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:15:53.704763    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:15:53.704778    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:15:53.716338    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:15:53.716352    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:15:53.731373    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:15:53.731383    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:15:53.748995    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:15:53.749005    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:15:53.773437    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:15:53.773445    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:15:53.809531    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:15:53.809546    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:15:56.324039    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:16:01.325551    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:16:01.325815    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:16:01.352544    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:16:01.352659    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:16:01.369771    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:16:01.369866    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:16:01.383377    8746 logs.go:276] 2 containers: [62d13acf4f90 a8fde36854fe]
	I0717 11:16:01.383443    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:16:01.394908    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:16:01.394980    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:16:01.405441    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:16:01.405504    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:16:01.416167    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:16:01.416229    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:16:01.426515    8746 logs.go:276] 0 containers: []
	W0717 11:16:01.426526    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:16:01.426577    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:16:01.437541    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:16:01.437557    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:16:01.437562    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:16:01.462736    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:16:01.462746    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:16:01.498776    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:16:01.498786    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:16:01.503506    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:16:01.503515    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:16:01.542866    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:16:01.542875    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:16:01.554624    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:16:01.554638    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:16:01.566364    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:16:01.566376    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:16:01.584017    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:16:01.584028    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:16:01.598523    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:16:01.598532    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:16:01.612555    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:16:01.612566    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:16:01.624606    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:16:01.624618    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:16:01.639732    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:16:01.639743    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:16:01.651139    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:16:01.651150    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
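(Each "Gathering logs for <component> [<id>] ..." pair then shells out to `docker logs --tail 400 <id>`, capping every container at its last 400 lines so a wedged component cannot flood the report. A hedged sketch of that step; tailContainerLogs is invented, and CombinedOutput is used because docker logs replays the container's stdout and stderr on the matching streams.

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs is illustrative only; the log above shows minikube
// shelling out to `docker logs --tail 400 <id>` for each container it found.
func tailContainerLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	// container ID copied from the report; any existing ID works
	logs, err := tailContainerLogs("f7822efda439")
	if err != nil {
		fmt.Println("gather failed:", err)
		return
	}
	fmt.Print(logs)
}
)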
	I0717 11:16:04.165071    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:16:09.167204    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:16:09.167439    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:16:09.190938    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:16:09.191039    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:16:09.207861    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:16:09.207939    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:16:09.221013    8746 logs.go:276] 2 containers: [62d13acf4f90 a8fde36854fe]
	I0717 11:16:09.221072    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:16:09.231799    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:16:09.231864    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:16:09.241822    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:16:09.241891    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:16:09.252122    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:16:09.252181    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:16:09.262731    8746 logs.go:276] 0 containers: []
	W0717 11:16:09.262743    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:16:09.262792    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:16:09.273318    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:16:09.273335    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:16:09.273341    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:16:09.291291    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:16:09.291302    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:16:09.303112    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:16:09.303124    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:16:09.326635    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:16:09.326647    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:16:09.362205    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:16:09.362212    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:16:09.366754    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:16:09.366763    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:16:09.380885    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:16:09.380896    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:16:09.392862    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:16:09.392875    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:16:09.404497    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:16:09.404510    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:16:09.431079    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:16:09.431090    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:16:09.442550    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:16:09.442564    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:16:09.483973    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:16:09.483985    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:16:09.498259    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:16:09.498271    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
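(The kubelet and Docker sections come from journald rather than the container runtime: `journalctl -u <unit> -n 400` for one or more units, plus a dmesg pass whose flags appear to disable the pager and color, keep human-readable timestamps, and filter to warning level and above. A sketch of the journalctl half, assuming a hypothetical lastUnitLines wrapper.

package main

import (
	"fmt"
	"os/exec"
)

// lastUnitLines is an invented wrapper over the journalctl calls above:
// the last 400 lines for one or more systemd units.
func lastUnitLines(units ...string) (string, error) {
	args := []string{"journalctl"}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	args = append(args, "-n", "400")
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// mirrors "Gathering logs for kubelet" and "Gathering logs for Docker"
	if out, err := lastUnitLines("kubelet"); err == nil {
		fmt.Print(out)
	}
	if out, err := lastUnitLines("docker", "cri-docker"); err == nil {
		fmt.Print(out)
	}
}
)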
	I0717 11:16:12.013204    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:16:17.015551    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:16:17.015659    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:16:17.027839    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:16:17.027916    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:16:17.038433    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:16:17.038505    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:16:17.053202    8746 logs.go:276] 2 containers: [62d13acf4f90 a8fde36854fe]
	I0717 11:16:17.053272    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:16:17.063533    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:16:17.063606    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:16:17.073938    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:16:17.074008    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:16:17.084290    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:16:17.084358    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:16:17.096786    8746 logs.go:276] 0 containers: []
	W0717 11:16:17.096797    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:16:17.096858    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:16:17.107257    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:16:17.107272    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:16:17.107279    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:16:17.141555    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:16:17.141564    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:16:17.145862    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:16:17.145872    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:16:17.158777    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:16:17.158787    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:16:17.174487    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:16:17.174500    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:16:17.191630    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:16:17.191641    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:16:17.203154    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:16:17.203165    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:16:17.228395    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:16:17.228404    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:16:17.239314    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:16:17.239326    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:16:17.274482    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:16:17.274498    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:16:17.292650    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:16:17.292659    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:16:17.308430    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:16:17.308440    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:16:17.320600    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:16:17.320610    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
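(The recurring "container status" command is a small shell fallback, copied verbatim into the sketch below: the backticks substitute crictl's full path when `which` finds it (otherwise the bare word "crictl", which then fails), and `|| sudo docker ps -a` takes over when crictl is absent or errors out. Invoked from Go it might look like this; only the command string comes from the report.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// command string copied verbatim from the ssh_runner.go lines above
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}
)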
	I0717 11:16:19.835415    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:16:24.837805    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:16:24.838002    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:16:24.859108    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:16:24.859200    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:16:24.873875    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:16:24.873948    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:16:24.885750    8746 logs.go:276] 2 containers: [62d13acf4f90 a8fde36854fe]
	I0717 11:16:24.885810    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:16:24.896550    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:16:24.896617    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:16:24.907013    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:16:24.907074    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:16:24.916952    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:16:24.917018    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:16:24.931430    8746 logs.go:276] 0 containers: []
	W0717 11:16:24.931441    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:16:24.931497    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:16:24.942157    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:16:24.942171    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:16:24.942177    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:16:24.947063    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:16:24.947071    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:16:24.984030    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:16:24.984042    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:16:24.995939    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:16:24.995953    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:16:25.010628    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:16:25.010641    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:16:25.022059    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:16:25.022070    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:16:25.039448    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:16:25.039459    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:16:25.050993    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:16:25.051004    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:16:25.086554    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:16:25.086566    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:16:25.123050    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:16:25.123063    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:16:25.135622    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:16:25.135633    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:16:25.148100    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:16:25.148110    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:16:25.173420    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:16:25.173430    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:16:27.690258    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:16:32.692545    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:16:32.692778    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:16:32.712591    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:16:32.712679    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:16:32.727127    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:16:32.727208    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:16:32.739396    8746 logs.go:276] 2 containers: [62d13acf4f90 a8fde36854fe]
	I0717 11:16:32.739472    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:16:32.750957    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:16:32.751032    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:16:32.761195    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:16:32.761261    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:16:32.771493    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:16:32.771562    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:16:32.781578    8746 logs.go:276] 0 containers: []
	W0717 11:16:32.781590    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:16:32.781653    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:16:32.796979    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:16:32.796994    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:16:32.796999    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:16:32.811305    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:16:32.811317    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:16:32.824976    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:16:32.824986    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:16:32.836615    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:16:32.836626    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:16:32.851057    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:16:32.851068    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:16:32.876298    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:16:32.876308    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:16:32.888927    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:16:32.888938    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:16:32.924058    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:16:32.924069    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:16:32.928279    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:16:32.928284    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:16:32.941855    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:16:32.941866    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:16:32.959437    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:16:32.959447    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:16:32.972355    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:16:32.972364    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:16:33.008596    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:16:33.008608    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:16:35.522384    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:16:40.524649    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:16:40.524811    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:16:40.535738    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:16:40.535812    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:16:40.545997    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:16:40.546071    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:16:40.556150    8746 logs.go:276] 2 containers: [62d13acf4f90 a8fde36854fe]
	I0717 11:16:40.556211    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:16:40.566476    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:16:40.566551    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:16:40.576709    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:16:40.576774    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:16:40.586750    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:16:40.586816    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:16:40.597919    8746 logs.go:276] 0 containers: []
	W0717 11:16:40.597931    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:16:40.597986    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:16:40.607950    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:16:40.607964    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:16:40.607969    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:16:40.633200    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:16:40.633208    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:16:40.644710    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:16:40.644720    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:16:40.680463    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:16:40.680473    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:16:40.684682    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:16:40.684690    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:16:40.699074    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:16:40.699085    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:16:40.716468    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:16:40.716477    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:16:40.730940    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:16:40.730950    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:16:40.742907    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:16:40.742922    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:16:40.755087    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:16:40.755103    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:16:40.792398    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:16:40.792410    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:16:40.806209    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:16:40.806225    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:16:40.817856    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:16:40.817868    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:16:43.331418    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:16:48.333696    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:16:48.333793    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:16:48.345088    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:16:48.345164    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:16:48.355670    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:16:48.355734    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:16:48.366430    8746 logs.go:276] 2 containers: [62d13acf4f90 a8fde36854fe]
	I0717 11:16:48.366502    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:16:48.376765    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:16:48.376837    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:16:48.387049    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:16:48.387117    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:16:48.397322    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:16:48.397394    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:16:48.407460    8746 logs.go:276] 0 containers: []
	W0717 11:16:48.407474    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:16:48.407525    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:16:48.417996    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:16:48.418015    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:16:48.418021    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:16:48.435406    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:16:48.435420    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:16:48.446686    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:16:48.446699    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:16:48.458119    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:16:48.458128    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:16:48.503506    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:16:48.503518    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:16:48.521574    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:16:48.521585    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:16:48.538178    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:16:48.538191    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:16:48.550186    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:16:48.550197    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:16:48.564353    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:16:48.564365    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:16:48.587802    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:16:48.587812    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:16:48.621534    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:16:48.621546    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:16:48.625654    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:16:48.625663    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:16:48.647437    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:16:48.647446    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:16:51.161438    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:16:56.161970    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:16:56.162086    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:16:56.174660    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:16:56.174737    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:16:56.185947    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:16:56.186024    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:16:56.197952    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:16:56.198025    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:16:56.209348    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:16:56.209415    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:16:56.220034    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:16:56.220102    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:16:56.230737    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:16:56.230805    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:16:56.241289    8746 logs.go:276] 0 containers: []
	W0717 11:16:56.241299    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:16:56.241356    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:16:56.251911    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:16:56.251928    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:16:56.251934    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:16:56.285772    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:16:56.285782    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:16:56.322294    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:16:56.322307    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:16:56.340507    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:16:56.340521    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:16:56.352379    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:16:56.352390    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:16:56.356615    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:16:56.356621    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:16:56.368050    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:16:56.368064    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:16:56.391733    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:16:56.391741    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:16:56.406386    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:16:56.406397    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:16:56.420814    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:16:56.420831    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:16:56.432628    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:16:56.432643    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:16:56.443960    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:16:56.443973    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:16:56.455854    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:16:56.455870    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:16:56.467414    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:16:56.467425    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:16:56.479632    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:16:56.479644    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:16:58.999210    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:17:04.001561    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:17:04.001780    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:17:04.018338    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:17:04.018422    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:17:04.030840    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:17:04.030910    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:17:04.046805    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:17:04.046890    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:17:04.057340    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:17:04.057405    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:17:04.067575    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:17:04.067641    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:17:04.078149    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:17:04.078217    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:17:04.090110    8746 logs.go:276] 0 containers: []
	W0717 11:17:04.090121    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:17:04.090183    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:17:04.100650    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:17:04.100671    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:17:04.100676    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:17:04.134682    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:17:04.134691    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:17:04.149491    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:17:04.149503    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:17:04.161094    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:17:04.161108    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:17:04.172857    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:17:04.172868    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:17:04.187645    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:17:04.187656    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:17:04.205442    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:17:04.205454    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:17:04.209974    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:17:04.209982    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:17:04.225958    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:17:04.225973    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:17:04.237533    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:17:04.237547    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:17:04.248776    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:17:04.248786    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:17:04.275023    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:17:04.275041    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:17:04.308888    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:17:04.308904    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:17:04.324660    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:17:04.324670    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:17:04.336266    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:17:04.336275    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:17:06.850712    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:17:11.853076    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:17:11.853312    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:17:11.871207    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:17:11.871297    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:17:11.884330    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:17:11.884410    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:17:11.895536    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:17:11.895610    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:17:11.905609    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:17:11.905670    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:17:11.920558    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:17:11.920626    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:17:11.934077    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:17:11.934141    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:17:11.944306    8746 logs.go:276] 0 containers: []
	W0717 11:17:11.944319    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:17:11.944371    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:17:11.954479    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:17:11.954496    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:17:11.954501    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:17:11.966304    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:17:11.966314    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:17:11.971223    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:17:11.971233    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:17:11.982534    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:17:11.982544    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:17:11.997590    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:17:11.997602    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:17:12.033637    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:17:12.033649    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:17:12.052307    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:17:12.052317    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:17:12.069245    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:17:12.069258    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:17:12.094370    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:17:12.094382    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:17:12.129809    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:17:12.129824    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:17:12.145660    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:17:12.145672    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:17:12.157535    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:17:12.157548    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:17:12.169874    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:17:12.169885    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:17:12.182056    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:17:12.182073    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:17:12.194329    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:17:12.194340    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:17:14.711538    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:17:19.713923    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:17:19.714156    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:17:19.734925    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:17:19.735022    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:17:19.749953    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:17:19.750027    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:17:19.762341    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:17:19.762410    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:17:19.773601    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:17:19.773669    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:17:19.784067    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:17:19.784140    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:17:19.795039    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:17:19.795103    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:17:19.805225    8746 logs.go:276] 0 containers: []
	W0717 11:17:19.805235    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:17:19.805293    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:17:19.825589    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:17:19.825606    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:17:19.825612    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:17:19.838725    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:17:19.838739    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:17:19.865151    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:17:19.865160    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:17:19.880227    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:17:19.880240    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:17:19.891847    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:17:19.891859    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:17:19.927570    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:17:19.927579    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:17:20.006149    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:17:20.006160    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:17:20.020513    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:17:20.020523    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:17:20.032557    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:17:20.032568    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:17:20.045230    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:17:20.045244    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:17:20.049915    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:17:20.049924    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:17:20.064473    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:17:20.064486    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:17:20.080900    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:17:20.080910    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:17:20.095450    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:17:20.095460    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:17:20.113627    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:17:20.113637    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:17:22.625891    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:17:27.626102    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:17:27.626365    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:17:27.651294    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:17:27.651411    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:17:27.667994    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:17:27.668078    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:17:27.687284    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:17:27.687357    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:17:27.699349    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:17:27.699415    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:17:27.717877    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:17:27.717945    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:17:27.728517    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:17:27.728588    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:17:27.738552    8746 logs.go:276] 0 containers: []
	W0717 11:17:27.738563    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:17:27.738625    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:17:27.748889    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:17:27.748905    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:17:27.748910    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:17:27.786055    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:17:27.786067    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:17:27.797905    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:17:27.797915    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:17:27.809826    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:17:27.809839    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:17:27.827897    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:17:27.827908    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:17:27.854729    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:17:27.854745    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:17:27.868881    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:17:27.868892    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:17:27.880319    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:17:27.880334    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:17:27.892010    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:17:27.892027    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:17:27.904409    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:17:27.904423    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:17:27.940065    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:17:27.940072    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:17:27.944694    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:17:27.944701    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:17:27.959701    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:17:27.959715    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:17:27.971179    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:17:27.971193    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:17:27.982745    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:17:27.982755    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:17:30.497755    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:17:35.498827    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:17:35.498923    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:17:35.509849    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:17:35.509916    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:17:35.521754    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:17:35.521832    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:17:35.533777    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:17:35.533871    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:17:35.545959    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:17:35.546045    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:17:35.557156    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:17:35.557229    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:17:35.569207    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:17:35.569273    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:17:35.580477    8746 logs.go:276] 0 containers: []
	W0717 11:17:35.580490    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:17:35.580549    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:17:35.591931    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:17:35.591949    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:17:35.591954    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:17:35.604953    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:17:35.604964    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:17:35.617847    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:17:35.617861    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:17:35.636582    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:17:35.636592    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:17:35.649109    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:17:35.649121    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:17:35.671234    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:17:35.671246    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:17:35.710022    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:17:35.710038    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:17:35.722701    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:17:35.722715    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:17:35.749306    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:17:35.749317    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:17:35.762430    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:17:35.762443    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:17:35.767046    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:17:35.767056    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:17:35.782087    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:17:35.782099    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:17:35.794797    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:17:35.794811    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:17:35.832208    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:17:35.832223    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:17:35.847074    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:17:35.847086    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:17:38.373838    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:17:43.375561    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:17:43.375843    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:17:43.398302    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:17:43.398406    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:17:43.415783    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:17:43.415860    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:17:43.428347    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:17:43.428425    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:17:43.439738    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:17:43.439805    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:17:43.450704    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:17:43.450780    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:17:43.460795    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:17:43.460861    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:17:43.473504    8746 logs.go:276] 0 containers: []
	W0717 11:17:43.473517    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:17:43.473579    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:17:43.484361    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:17:43.484379    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:17:43.484384    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:17:43.502486    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:17:43.502497    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:17:43.514089    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:17:43.514101    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:17:43.532985    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:17:43.532995    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:17:43.544712    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:17:43.544722    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:17:43.557147    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:17:43.557159    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:17:43.571592    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:17:43.571605    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:17:43.597410    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:17:43.597419    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:17:43.611320    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:17:43.611335    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:17:43.627604    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:17:43.627613    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:17:43.639944    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:17:43.639952    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:17:43.656908    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:17:43.656919    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:17:43.668861    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:17:43.668872    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:17:43.704863    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:17:43.704872    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:17:43.709194    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:17:43.709203    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:17:46.245580    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:17:51.247600    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:17:51.247750    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:17:51.261211    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:17:51.261294    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:17:51.272706    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:17:51.272767    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:17:51.286978    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:17:51.287047    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:17:51.297558    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:17:51.297620    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:17:51.308045    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:17:51.308107    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:17:51.318832    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:17:51.318907    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:17:51.329127    8746 logs.go:276] 0 containers: []
	W0717 11:17:51.329139    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:17:51.329197    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:17:51.339704    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:17:51.339721    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:17:51.339727    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:17:51.353178    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:17:51.353190    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:17:51.368614    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:17:51.368626    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:17:51.380708    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:17:51.380720    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:17:51.396617    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:17:51.396630    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:17:51.409150    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:17:51.409161    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:17:51.421003    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:17:51.421014    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:17:51.445018    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:17:51.445031    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:17:51.479280    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:17:51.479294    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:17:51.516162    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:17:51.516174    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:17:51.539833    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:17:51.539847    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:17:51.551921    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:17:51.551931    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:17:51.556663    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:17:51.556669    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:17:51.569362    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:17:51.569372    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:17:51.586629    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:17:51.586640    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:17:54.106575    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:17:59.108621    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:17:59.108884    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:17:59.131257    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:17:59.131353    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:17:59.146463    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:17:59.146546    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:17:59.159651    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:17:59.159723    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:17:59.170491    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:17:59.170565    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:17:59.181464    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:17:59.181535    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:17:59.193884    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:17:59.193951    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:17:59.204474    8746 logs.go:276] 0 containers: []
	W0717 11:17:59.204486    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:17:59.204549    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:17:59.215010    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:17:59.215030    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:17:59.215035    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:17:59.229141    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:17:59.229155    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:17:59.240351    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:17:59.240363    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:17:59.252441    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:17:59.252453    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:17:59.286810    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:17:59.286821    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:17:59.322799    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:17:59.322809    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:17:59.337381    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:17:59.337392    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:17:59.354640    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:17:59.354649    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:17:59.367478    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:17:59.367489    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:17:59.379000    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:17:59.379010    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:17:59.404098    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:17:59.404106    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:17:59.415757    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:17:59.415766    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:17:59.427695    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:17:59.427709    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:17:59.432031    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:17:59.432038    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:17:59.444051    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:17:59.444063    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:18:01.960965    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:18:06.963124    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:18:06.963367    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:18:06.989325    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:18:06.989452    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:18:07.014353    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:18:07.014437    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:18:07.027566    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:18:07.027639    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:18:07.039703    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:18:07.039776    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:18:07.050389    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:18:07.050447    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:18:07.060924    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:18:07.060981    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:18:07.070699    8746 logs.go:276] 0 containers: []
	W0717 11:18:07.070709    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:18:07.070764    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:18:07.083312    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:18:07.083332    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:18:07.083338    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:18:07.097774    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:18:07.097785    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:18:07.109980    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:18:07.109991    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:18:07.121800    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:18:07.121812    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:18:07.126019    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:18:07.126029    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:18:07.165256    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:18:07.165267    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:18:07.177853    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:18:07.177863    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:18:07.203571    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:18:07.203580    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:18:07.238772    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:18:07.238779    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:18:07.253938    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:18:07.253948    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:18:07.265537    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:18:07.265549    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:18:07.276988    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:18:07.276998    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:18:07.292155    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:18:07.292166    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:18:07.312047    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:18:07.312061    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:18:07.330955    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:18:07.330965    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:18:09.851228    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:18:14.853416    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:18:14.853587    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:18:14.866623    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:18:14.866698    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:18:14.877716    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:18:14.877780    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:18:14.896495    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:18:14.896572    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:18:14.911079    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:18:14.911155    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:18:14.921682    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:18:14.921756    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:18:14.931584    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:18:14.931653    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:18:14.944065    8746 logs.go:276] 0 containers: []
	W0717 11:18:14.944074    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:18:14.944126    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:18:14.954821    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:18:14.954836    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:18:14.954842    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:18:14.966763    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:18:14.966776    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:18:14.992869    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:18:14.992886    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:18:14.997132    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:18:14.997139    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:18:15.032768    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:18:15.032784    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:18:15.047555    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:18:15.047568    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:18:15.061291    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:18:15.061305    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:18:15.072635    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:18:15.072645    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:18:15.086480    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:18:15.086491    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:18:15.098330    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:18:15.098341    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:18:15.134009    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:18:15.134018    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:18:15.146612    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:18:15.146626    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:18:15.158236    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:18:15.158251    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:18:15.172799    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:18:15.172810    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:18:15.189186    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:18:15.189195    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:18:17.706679    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:18:22.707427    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:18:22.707595    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:18:22.727640    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:18:22.727757    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:18:22.742109    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:18:22.742176    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:18:22.758891    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:18:22.758958    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:18:22.769506    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:18:22.769571    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:18:22.780037    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:18:22.780103    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:18:22.790341    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:18:22.790398    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:18:22.800339    8746 logs.go:276] 0 containers: []
	W0717 11:18:22.800350    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:18:22.800405    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:18:22.814800    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:18:22.814818    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:18:22.814823    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:18:22.828701    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:18:22.828712    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:18:22.840378    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:18:22.840387    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:18:22.852108    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:18:22.852121    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:18:22.869657    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:18:22.869668    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:18:22.882414    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:18:22.882425    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:18:22.897649    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:18:22.897669    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:18:22.911821    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:18:22.911834    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:18:22.923553    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:18:22.923564    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:18:22.935454    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:18:22.935463    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:18:22.958780    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:18:22.958789    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:18:22.991947    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:18:22.991956    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:18:23.004498    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:18:23.004508    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:18:23.020228    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:18:23.020237    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:18:23.024539    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:18:23.024545    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:18:25.564796    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:18:30.567613    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:18:30.568051    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:18:30.607979    8746 logs.go:276] 1 containers: [4671000ac890]
	I0717 11:18:30.608109    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:18:30.629899    8746 logs.go:276] 1 containers: [f7822efda439]
	I0717 11:18:30.630018    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:18:30.645474    8746 logs.go:276] 4 containers: [697cc48d12e6 0a7fdec260c5 62d13acf4f90 a8fde36854fe]
	I0717 11:18:30.645550    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:18:30.658043    8746 logs.go:276] 1 containers: [f9843b067c9d]
	I0717 11:18:30.658116    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:18:30.669526    8746 logs.go:276] 1 containers: [1714ab36fb9f]
	I0717 11:18:30.669591    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:18:30.680939    8746 logs.go:276] 1 containers: [1e0385446fac]
	I0717 11:18:30.681014    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:18:30.694654    8746 logs.go:276] 0 containers: []
	W0717 11:18:30.694665    8746 logs.go:278] No container was found matching "kindnet"
	I0717 11:18:30.694730    8746 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:18:30.706293    8746 logs.go:276] 1 containers: [307213bf7b75]
	I0717 11:18:30.706311    8746 logs.go:123] Gathering logs for kubelet ...
	I0717 11:18:30.706316    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:18:30.742505    8746 logs.go:123] Gathering logs for kube-apiserver [4671000ac890] ...
	I0717 11:18:30.742518    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4671000ac890"
	I0717 11:18:30.757295    8746 logs.go:123] Gathering logs for etcd [f7822efda439] ...
	I0717 11:18:30.757307    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7822efda439"
	I0717 11:18:30.772105    8746 logs.go:123] Gathering logs for coredns [62d13acf4f90] ...
	I0717 11:18:30.772118    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d13acf4f90"
	I0717 11:18:30.784154    8746 logs.go:123] Gathering logs for kube-proxy [1714ab36fb9f] ...
	I0717 11:18:30.784169    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1714ab36fb9f"
	I0717 11:18:30.796346    8746 logs.go:123] Gathering logs for kube-controller-manager [1e0385446fac] ...
	I0717 11:18:30.796356    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e0385446fac"
	I0717 11:18:30.814639    8746 logs.go:123] Gathering logs for storage-provisioner [307213bf7b75] ...
	I0717 11:18:30.814650    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307213bf7b75"
	I0717 11:18:30.826620    8746 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:18:30.826634    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:18:30.863900    8746 logs.go:123] Gathering logs for coredns [0a7fdec260c5] ...
	I0717 11:18:30.863911    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7fdec260c5"
	I0717 11:18:30.876111    8746 logs.go:123] Gathering logs for kube-scheduler [f9843b067c9d] ...
	I0717 11:18:30.876123    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9843b067c9d"
	I0717 11:18:30.892476    8746 logs.go:123] Gathering logs for Docker ...
	I0717 11:18:30.892487    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:18:30.917638    8746 logs.go:123] Gathering logs for dmesg ...
	I0717 11:18:30.917647    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:18:30.921792    8746 logs.go:123] Gathering logs for coredns [697cc48d12e6] ...
	I0717 11:18:30.921802    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697cc48d12e6"
	I0717 11:18:30.933312    8746 logs.go:123] Gathering logs for coredns [a8fde36854fe] ...
	I0717 11:18:30.933324    8746 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fde36854fe"
	I0717 11:18:30.944959    8746 logs.go:123] Gathering logs for container status ...
	I0717 11:18:30.944974    8746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:18:33.458629    8746 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:18:38.461250    8746 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:18:38.465397    8746 out.go:177] 
	W0717 11:18:38.469381    8746 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0717 11:18:38.469403    8746 out.go:239] * 
	W0717 11:18:38.470844    8746 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:18:38.481322    8746 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-058000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (577.79s)
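The failure above is a timeout rather than a crash: each api_server.go:253/269 pair in the log is a single probe of the apiserver health endpoint that gives up after five seconds, repeated until the 6m0s node-wait deadline expires. A minimal way to rerun the same probe by hand, assuming the guest from this run were still up (the https://10.0.2.15:8443/healthz endpoint and the container filter are taken verbatim from the log; the profile name comes from the failing command):

	# from the host, open a shell in the guest:
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-058000
	# inside the guest, hit the same endpoint minikube polls (-k: the apiserver cert is self-signed):
	curl -k https://10.0.2.15:8443/healthz
	# check whether the apiserver container is running at all:
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}

Note that the log's own docker ps checks kept finding container 4671000ac890 throughout, so the apiserver container existed; it simply never reported healthy.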

TestPause/serial/Start (9.87s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-187000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-187000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.8433125s)

-- stdout --
	* [pause-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-187000" primary control-plane node in "pause-187000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-187000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-187000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-187000 -n pause-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-187000 -n pause-187000: exit status 7 (30.9495ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-187000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.87s)
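This failure, and every qemu2 start failure that follows, has the same root cause: QEMU could not connect to the network helper socket at "/var/run/socket_vmnet", so both the initial host creation and the retry died with "Connection refused". A minimal triage sketch, assuming the socket on this host is provided by lima-vm's socket_vmnet daemon, as in minikube's usual qemu2/socket_vmnet setup (the socket path is from the log; everything else here is an assumption about the host):

	# is the socket present on the host?
	ls -l /var/run/socket_vmnet
	# is any daemon actually holding it? no output here means nothing
	# is running, which produces exactly the "Connection refused" above:
	pgrep -fl socket_vmnet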

TestNoKubernetes/serial/StartWithK8s (9.98s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-813000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-813000 --driver=qemu2 : exit status 80 (9.936613541s)

-- stdout --
	* [NoKubernetes-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-813000" primary control-plane node in "NoKubernetes-813000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-813000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-813000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-813000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-813000 -n NoKubernetes-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-813000 -n NoKubernetes-813000: exit status 7 (43.164125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.98s)

TestNoKubernetes/serial/StartWithStopK8s (5.26s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-813000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-813000 --no-kubernetes --driver=qemu2 : exit status 80 (5.230170917s)

-- stdout --
	* [NoKubernetes-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-813000
	* Restarting existing qemu2 VM for "NoKubernetes-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-813000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-813000 -n NoKubernetes-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-813000 -n NoKubernetes-813000: exit status 7 (30.2995ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.26s)

TestNoKubernetes/serial/Start (5.27s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-813000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-813000 --no-kubernetes --driver=qemu2 : exit status 80 (5.23597225s)

-- stdout --
	* [NoKubernetes-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-813000
	* Restarting existing qemu2 VM for "NoKubernetes-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-813000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-813000 -n NoKubernetes-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-813000 -n NoKubernetes-813000: exit status 7 (33.460458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.27s)
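
Every failure in this run reduces to the same driver error: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client cannot hand qemu a network file descriptor. The following is a minimal standalone Go sketch, not part of the minikube test suite, that reproduces the check the qemu2 driver is effectively failing; the socket path is taken from the logs above, everything else is illustrative:

	// probe_socket.go - standalone diagnostic, assuming only that
	// socket_vmnet serves a unix socket at the path from the logs.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing qemu2 driver logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A "connection refused" here matches the driver failure above.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

Run on the CI host, a "connection refused" result would point at the socket_vmnet daemon on the host rather than at minikube itself.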

                                                
                                    
TestNoKubernetes/serial/ProfileList (279.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 profile list: signal: killed (4m39.35682625s)
no_kubernetes_test.go:171: Profile list failed : "out/minikube-darwin-arm64 profile list" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-813000 -n NoKubernetes-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-813000 -n NoKubernetes-813000: exit status 7 (30.555708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-813000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/ProfileList (279.39s)
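
Unlike the other failures, ProfileList did not end with a minikube exit code: the hung "out/minikube-darwin-arm64 profile list" was killed after 4m39s, and "signal: killed" is what Go's os/exec reports when a child is terminated by SIGKILL, for example when a context deadline fires. A small sketch of that mechanism, with an illustrative command and timeout rather than the harness's actual values:

	// timeout_run.go - sketch of bounding a CLI call with a deadline;
	// "sleep 10" and the 3s timeout are stand-ins, not harness values.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()

		// Once the deadline passes, CommandContext sends SIGKILL and Run
		// returns an error whose text is "signal: killed", the same
		// failure shape as the ProfileList result above.
		cmd := exec.CommandContext(ctx, "sleep", "10")
		if err := cmd.Run(); err != nil {
			fmt.Println("command failed:", err)
		}
	}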

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.927855834s)

                                                
                                                
-- stdout --
	* [auto-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-306000" primary control-plane node in "auto-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 11:21:11.264276    9024 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:21:11.264425    9024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:21:11.264430    9024 out.go:304] Setting ErrFile to fd 2...
	I0717 11:21:11.264432    9024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:21:11.264575    9024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:21:11.265722    9024 out.go:298] Setting JSON to false
	I0717 11:21:11.282327    9024 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6643,"bootTime":1721233828,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:21:11.282415    9024 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:21:11.288180    9024 out.go:177] * [auto-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:21:11.294181    9024 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:21:11.294201    9024 notify.go:220] Checking for updates...
	I0717 11:21:11.300978    9024 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:21:11.304026    9024 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:21:11.307043    9024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:21:11.308195    9024 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:21:11.311045    9024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:21:11.314423    9024 config.go:182] Loaded profile config "NoKubernetes-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0717 11:21:11.314496    9024 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:21:11.314555    9024 config.go:182] Loaded profile config "pause-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:21:11.314613    9024 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:21:11.314665    9024 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:21:11.314714    9024 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:21:11.318856    9024 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:21:11.326025    9024 start.go:297] selected driver: qemu2
	I0717 11:21:11.326030    9024 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:21:11.326036    9024 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:21:11.328339    9024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:21:11.331045    9024 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:21:11.334083    9024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:21:11.334111    9024 cni.go:84] Creating CNI manager for ""
	I0717 11:21:11.334119    9024 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:21:11.334122    9024 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:21:11.334162    9024 start.go:340] cluster config:
	{Name:auto-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:21:11.337751    9024 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:21:11.344996    9024 out.go:177] * Starting "auto-306000" primary control-plane node in "auto-306000" cluster
	I0717 11:21:11.349058    9024 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:21:11.349074    9024 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:21:11.349087    9024 cache.go:56] Caching tarball of preloaded images
	I0717 11:21:11.349142    9024 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:21:11.349147    9024 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:21:11.349213    9024 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/auto-306000/config.json ...
	I0717 11:21:11.349225    9024 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/auto-306000/config.json: {Name:mk4b06ffd141db72727f5a7bd21cdb582c684040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:21:11.349842    9024 start.go:360] acquireMachinesLock for auto-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:21:11.349872    9024 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "auto-306000"
	I0717 11:21:11.349881    9024 start.go:93] Provisioning new machine with config: &{Name:auto-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:21:11.349928    9024 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:21:11.357041    9024 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:21:11.372770    9024 start.go:159] libmachine.API.Create for "auto-306000" (driver="qemu2")
	I0717 11:21:11.372803    9024 client.go:168] LocalClient.Create starting
	I0717 11:21:11.372865    9024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:21:11.372896    9024 main.go:141] libmachine: Decoding PEM data...
	I0717 11:21:11.372907    9024 main.go:141] libmachine: Parsing certificate...
	I0717 11:21:11.372944    9024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:21:11.372970    9024 main.go:141] libmachine: Decoding PEM data...
	I0717 11:21:11.372983    9024 main.go:141] libmachine: Parsing certificate...
	I0717 11:21:11.373348    9024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:21:11.725957    9024 main.go:141] libmachine: Creating SSH key...
	I0717 11:21:11.755557    9024 main.go:141] libmachine: Creating Disk image...
	I0717 11:21:11.755562    9024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:21:11.755735    9024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/disk.qcow2
	I0717 11:21:11.770095    9024 main.go:141] libmachine: STDOUT: 
	I0717 11:21:11.770141    9024 main.go:141] libmachine: STDERR: 
	I0717 11:21:11.770209    9024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/disk.qcow2 +20000M
	I0717 11:21:11.778242    9024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:21:11.778256    9024 main.go:141] libmachine: STDERR: 
	I0717 11:21:11.778272    9024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/disk.qcow2
	I0717 11:21:11.778277    9024 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:21:11.778290    9024 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:21:11.778316    9024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:97:66:a3:5a:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/disk.qcow2
	I0717 11:21:11.780393    9024 main.go:141] libmachine: STDOUT: 
	I0717 11:21:11.780411    9024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:21:11.780429    9024 client.go:171] duration metric: took 407.622417ms to LocalClient.Create
	I0717 11:21:13.782622    9024 start.go:128] duration metric: took 2.432667s to createHost
	I0717 11:21:13.782676    9024 start.go:83] releasing machines lock for "auto-306000", held for 2.432799708s
	W0717 11:21:13.782709    9024 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:21:13.793018    9024 out.go:177] * Deleting "auto-306000" in qemu2 ...
	W0717 11:21:13.816140    9024 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:21:13.816156    9024 start.go:729] Will try again in 5 seconds ...
	I0717 11:21:18.818331    9024 start.go:360] acquireMachinesLock for auto-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:21:18.818613    9024 start.go:364] duration metric: took 215.209µs to acquireMachinesLock for "auto-306000"
	I0717 11:21:18.818678    9024 start.go:93] Provisioning new machine with config: &{Name:auto-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:21:18.818775    9024 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:21:18.827102    9024 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:21:18.860378    9024 start.go:159] libmachine.API.Create for "auto-306000" (driver="qemu2")
	I0717 11:21:18.860431    9024 client.go:168] LocalClient.Create starting
	I0717 11:21:18.860530    9024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:21:18.860579    9024 main.go:141] libmachine: Decoding PEM data...
	I0717 11:21:18.860591    9024 main.go:141] libmachine: Parsing certificate...
	I0717 11:21:18.860634    9024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:21:18.860665    9024 main.go:141] libmachine: Decoding PEM data...
	I0717 11:21:18.860673    9024 main.go:141] libmachine: Parsing certificate...
	I0717 11:21:18.861039    9024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:21:19.047826    9024 main.go:141] libmachine: Creating SSH key...
	I0717 11:21:19.097692    9024 main.go:141] libmachine: Creating Disk image...
	I0717 11:21:19.097699    9024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:21:19.097908    9024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/disk.qcow2
	I0717 11:21:19.107414    9024 main.go:141] libmachine: STDOUT: 
	I0717 11:21:19.107435    9024 main.go:141] libmachine: STDERR: 
	I0717 11:21:19.107485    9024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/disk.qcow2 +20000M
	I0717 11:21:19.115629    9024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:21:19.115642    9024 main.go:141] libmachine: STDERR: 
	I0717 11:21:19.115661    9024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/disk.qcow2
	I0717 11:21:19.115665    9024 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:21:19.115678    9024 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:21:19.115703    9024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:ba:ae:eb:e0:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/auto-306000/disk.qcow2
	I0717 11:21:19.117440    9024 main.go:141] libmachine: STDOUT: 
	I0717 11:21:19.117456    9024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:21:19.117467    9024 client.go:171] duration metric: took 257.0305ms to LocalClient.Create
	I0717 11:21:21.119575    9024 start.go:128] duration metric: took 2.300784667s to createHost
	I0717 11:21:21.119632    9024 start.go:83] releasing machines lock for "auto-306000", held for 2.301009875s
	W0717 11:21:21.119826    9024 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:21:21.128153    9024 out.go:177] 
	W0717 11:21:21.135171    9024 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:21:21.135190    9024 out.go:239] * 
	* 
	W0717 11:21:21.136106    9024 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:21:21.145113    9024 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.93s)
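
The stderr trace above shows the driver's recovery path: LocalClient.Create fails, the profile is deleted, start.go pauses ("Will try again in 5 seconds ..."), and a second create attempt fails the same way before the GUEST_PROVISION exit. A toy Go sketch of that one-retry-with-delay shape; the function name is hypothetical, not minikube's internals:

	// retry_start.go - illustrative only; createHost stands in for the
	// qemu2 driver's create path, which fails while nothing listens on
	// /var/run/socket_vmnet.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" pause
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}

Because the root cause is host-side, the retry changes nothing here: both attempts hit the same refused connection, which is why every network-plugin start below fails in roughly ten seconds.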

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.981473584s)

                                                
                                                
-- stdout --
	* [calico-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-306000" primary control-plane node in "calico-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 11:23:23.318535    9151 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:23:23.318701    9151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:23:23.318705    9151 out.go:304] Setting ErrFile to fd 2...
	I0717 11:23:23.318707    9151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:23:23.318860    9151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:23:23.320061    9151 out.go:298] Setting JSON to false
	I0717 11:23:23.339515    9151 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6775,"bootTime":1721233828,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:23:23.339590    9151 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:23:23.343794    9151 out.go:177] * [calico-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:23:23.350785    9151 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:23:23.350853    9151 notify.go:220] Checking for updates...
	I0717 11:23:23.357704    9151 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:23:23.360686    9151 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:23:23.367654    9151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:23:23.370779    9151 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:23:23.377738    9151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:23:23.382044    9151 config.go:182] Loaded profile config "NoKubernetes-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0717 11:23:23.382108    9151 config.go:182] Loaded profile config "auto-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:23:23.382166    9151 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:23:23.382225    9151 config.go:182] Loaded profile config "pause-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:23:23.382278    9151 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:23:23.382333    9151 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:23:23.382371    9151 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:23:23.386692    9151 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:23:23.393773    9151 start.go:297] selected driver: qemu2
	I0717 11:23:23.393781    9151 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:23:23.393788    9151 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:23:23.396622    9151 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:23:23.399708    9151 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:23:23.402786    9151 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:23:23.402818    9151 cni.go:84] Creating CNI manager for "calico"
	I0717 11:23:23.402823    9151 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0717 11:23:23.402858    9151 start.go:340] cluster config:
	{Name:calico-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:23:23.406875    9151 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:23:23.414643    9151 out.go:177] * Starting "calico-306000" primary control-plane node in "calico-306000" cluster
	I0717 11:23:23.418713    9151 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:23:23.418728    9151 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:23:23.418737    9151 cache.go:56] Caching tarball of preloaded images
	I0717 11:23:23.418792    9151 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:23:23.418798    9151 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:23:23.418855    9151 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/calico-306000/config.json ...
	I0717 11:23:23.418867    9151 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/calico-306000/config.json: {Name:mkc9ecf1ec00e34a0ee8118c23f9a6a1f961df4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:23:23.419216    9151 start.go:360] acquireMachinesLock for calico-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:23:23.419247    9151 start.go:364] duration metric: took 26.167µs to acquireMachinesLock for "calico-306000"
	I0717 11:23:23.419256    9151 start.go:93] Provisioning new machine with config: &{Name:calico-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:23:23.419289    9151 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:23:23.427705    9151 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:23:23.442806    9151 start.go:159] libmachine.API.Create for "calico-306000" (driver="qemu2")
	I0717 11:23:23.442837    9151 client.go:168] LocalClient.Create starting
	I0717 11:23:23.442898    9151 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:23:23.442928    9151 main.go:141] libmachine: Decoding PEM data...
	I0717 11:23:23.442937    9151 main.go:141] libmachine: Parsing certificate...
	I0717 11:23:23.442976    9151 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:23:23.442999    9151 main.go:141] libmachine: Decoding PEM data...
	I0717 11:23:23.443009    9151 main.go:141] libmachine: Parsing certificate...
	I0717 11:23:23.443426    9151 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:23:23.704922    9151 main.go:141] libmachine: Creating SSH key...
	I0717 11:23:23.887424    9151 main.go:141] libmachine: Creating Disk image...
	I0717 11:23:23.887437    9151 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:23:23.887665    9151 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/disk.qcow2
	I0717 11:23:23.900142    9151 main.go:141] libmachine: STDOUT: 
	I0717 11:23:23.900164    9151 main.go:141] libmachine: STDERR: 
	I0717 11:23:23.900211    9151 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/disk.qcow2 +20000M
	I0717 11:23:23.908318    9151 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:23:23.908335    9151 main.go:141] libmachine: STDERR: 
	I0717 11:23:23.908355    9151 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/disk.qcow2
	I0717 11:23:23.908358    9151 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:23:23.908369    9151 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:23:23.908404    9151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:3f:9a:24:0e:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/disk.qcow2
	I0717 11:23:23.910157    9151 main.go:141] libmachine: STDOUT: 
	I0717 11:23:23.910171    9151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:23:23.910189    9151 client.go:171] duration metric: took 467.348917ms to LocalClient.Create
	I0717 11:23:25.912300    9151 start.go:128] duration metric: took 2.493001458s to createHost
	I0717 11:23:25.912367    9151 start.go:83] releasing machines lock for "calico-306000", held for 2.493107958s
	W0717 11:23:25.912398    9151 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:23:25.920489    9151 out.go:177] * Deleting "calico-306000" in qemu2 ...
	W0717 11:23:25.940020    9151 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:23:25.940036    9151 start.go:729] Will try again in 5 seconds ...
	I0717 11:23:30.942183    9151 start.go:360] acquireMachinesLock for calico-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:23:30.942469    9151 start.go:364] duration metric: took 232.083µs to acquireMachinesLock for "calico-306000"
	I0717 11:23:30.942532    9151 start.go:93] Provisioning new machine with config: &{Name:calico-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:23:30.942621    9151 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:23:30.951789    9151 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:23:30.982884    9151 start.go:159] libmachine.API.Create for "calico-306000" (driver="qemu2")
	I0717 11:23:30.982935    9151 client.go:168] LocalClient.Create starting
	I0717 11:23:30.983023    9151 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:23:30.983072    9151 main.go:141] libmachine: Decoding PEM data...
	I0717 11:23:30.983085    9151 main.go:141] libmachine: Parsing certificate...
	I0717 11:23:30.983130    9151 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:23:30.983165    9151 main.go:141] libmachine: Decoding PEM data...
	I0717 11:23:30.983173    9151 main.go:141] libmachine: Parsing certificate...
	I0717 11:23:30.983606    9151 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:23:31.126409    9151 main.go:141] libmachine: Creating SSH key...
	I0717 11:23:31.199692    9151 main.go:141] libmachine: Creating Disk image...
	I0717 11:23:31.199701    9151 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:23:31.199954    9151 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/disk.qcow2
	I0717 11:23:31.209797    9151 main.go:141] libmachine: STDOUT: 
	I0717 11:23:31.209815    9151 main.go:141] libmachine: STDERR: 
	I0717 11:23:31.209875    9151 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/disk.qcow2 +20000M
	I0717 11:23:31.217940    9151 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:23:31.217955    9151 main.go:141] libmachine: STDERR: 
	I0717 11:23:31.217965    9151 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/disk.qcow2
	I0717 11:23:31.217970    9151 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:23:31.217980    9151 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:23:31.218019    9151 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:9b:a6:9d:16:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/calico-306000/disk.qcow2
	I0717 11:23:31.219707    9151 main.go:141] libmachine: STDOUT: 
	I0717 11:23:31.219722    9151 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:23:31.219735    9151 client.go:171] duration metric: took 236.796667ms to LocalClient.Create
	I0717 11:23:33.221839    9151 start.go:128] duration metric: took 2.279194625s to createHost
	I0717 11:23:33.221880    9151 start.go:83] releasing machines lock for "calico-306000", held for 2.27940275s
	W0717 11:23:33.221999    9151 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:23:33.226465    9151 out.go:177] 
	W0717 11:23:33.235305    9151 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:23:33.235313    9151 out.go:239] * 
	* 
	W0717 11:23:33.235990    9151 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:23:33.247272    9151 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.98s)
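
Comparing the traces: the auto profile logged `Creating CNI manager for ""` and was resolved to bridge ("qemu2" driver + "docker" runtime on Kubernetes v1.24+), while this run logged `Creating CNI manager for "calico"` and set NetworkPlugin=cni directly. A toy Go sketch of that selection exactly as it appears in the logs; this mirrors the log lines only and is not minikube's cni package:

	// cni_select.go - illustrative mapping from the --cni flag value to
	// the CNI choice visible in the traces above.
	package main

	import "fmt"

	func chooseCNI(flag string) string {
		if flag == "" {
			return "bridge" // "recommending bridge" in the auto-306000 trace
		}
		return flag // explicit name or manifest path, used as given
	}

	func main() {
		for _, f := range []string{"", "calico", "testdata/kube-flannel.yaml"} {
			fmt.Printf("--cni=%q -> CNI manager for %q\n", f, chooseCNI(f))
		}
	}

The next failure passes a manifest path (testdata/kube-flannel.yaml) through the same flag, so the explicit-value branch covers both cases.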

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.945733458s)

                                                
                                                
-- stdout --
	* [custom-flannel-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-306000" primary control-plane node in "custom-flannel-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 11:25:35.618195    9304 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:25:35.618385    9304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:25:35.618389    9304 out.go:304] Setting ErrFile to fd 2...
	I0717 11:25:35.618393    9304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:25:35.618572    9304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:25:35.619961    9304 out.go:298] Setting JSON to false
	I0717 11:25:35.640515    9304 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6907,"bootTime":1721233828,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:25:35.640592    9304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:25:35.643727    9304 out.go:177] * [custom-flannel-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:25:35.649822    9304 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:25:35.649902    9304 notify.go:220] Checking for updates...
	I0717 11:25:35.655822    9304 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:25:35.658806    9304 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:25:35.661803    9304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:25:35.664764    9304 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:25:35.667790    9304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:25:35.669155    9304 config.go:182] Loaded profile config "NoKubernetes-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0717 11:25:35.669223    9304 config.go:182] Loaded profile config "auto-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:25:35.669280    9304 config.go:182] Loaded profile config "calico-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:25:35.669346    9304 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:25:35.669402    9304 config.go:182] Loaded profile config "pause-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:25:35.669455    9304 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:25:35.669509    9304 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:25:35.669559    9304 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:25:35.673769    9304 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:25:35.680639    9304 start.go:297] selected driver: qemu2
	I0717 11:25:35.680646    9304 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:25:35.680653    9304 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:25:35.682896    9304 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:25:35.685827    9304 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:25:35.688915    9304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:25:35.688957    9304 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0717 11:25:35.688967    9304 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0717 11:25:35.689002    9304 start.go:340] cluster config:
	{Name:custom-flannel-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:25:35.692696    9304 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:25:35.700848    9304 out.go:177] * Starting "custom-flannel-306000" primary control-plane node in "custom-flannel-306000" cluster
	I0717 11:25:35.706797    9304 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:25:35.706811    9304 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:25:35.706818    9304 cache.go:56] Caching tarball of preloaded images
	I0717 11:25:35.706873    9304 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:25:35.706878    9304 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:25:35.706930    9304 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/custom-flannel-306000/config.json ...
	I0717 11:25:35.706945    9304 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/custom-flannel-306000/config.json: {Name:mk4d6d4b4f7721ad081d016b7f45caa6094a7016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:25:35.707407    9304 start.go:360] acquireMachinesLock for custom-flannel-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:25:35.707445    9304 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "custom-flannel-306000"
	I0717 11:25:35.707456    9304 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:25:35.707484    9304 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:25:35.715789    9304 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:25:35.731054    9304 start.go:159] libmachine.API.Create for "custom-flannel-306000" (driver="qemu2")
	I0717 11:25:35.731087    9304 client.go:168] LocalClient.Create starting
	I0717 11:25:35.731154    9304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:25:35.731184    9304 main.go:141] libmachine: Decoding PEM data...
	I0717 11:25:35.731197    9304 main.go:141] libmachine: Parsing certificate...
	I0717 11:25:35.731237    9304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:25:35.731261    9304 main.go:141] libmachine: Decoding PEM data...
	I0717 11:25:35.731267    9304 main.go:141] libmachine: Parsing certificate...
	I0717 11:25:35.731731    9304 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:25:36.037875    9304 main.go:141] libmachine: Creating SSH key...
	I0717 11:25:36.117485    9304 main.go:141] libmachine: Creating Disk image...
	I0717 11:25:36.117494    9304 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:25:36.117704    9304 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/disk.qcow2
	I0717 11:25:36.126902    9304 main.go:141] libmachine: STDOUT: 
	I0717 11:25:36.126918    9304 main.go:141] libmachine: STDERR: 
	I0717 11:25:36.126960    9304 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/disk.qcow2 +20000M
	I0717 11:25:36.134922    9304 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:25:36.134943    9304 main.go:141] libmachine: STDERR: 
	I0717 11:25:36.134964    9304 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/disk.qcow2
	I0717 11:25:36.134968    9304 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:25:36.134980    9304 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:25:36.135004    9304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:8d:7a:6c:44:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/disk.qcow2
	I0717 11:25:36.136735    9304 main.go:141] libmachine: STDOUT: 
	I0717 11:25:36.136752    9304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:25:36.136778    9304 client.go:171] duration metric: took 405.688625ms to LocalClient.Create
	I0717 11:25:38.138857    9304 start.go:128] duration metric: took 2.431365541s to createHost
	I0717 11:25:38.138885    9304 start.go:83] releasing machines lock for "custom-flannel-306000", held for 2.431438416s
	W0717 11:25:38.138909    9304 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:25:38.149572    9304 out.go:177] * Deleting "custom-flannel-306000" in qemu2 ...
	W0717 11:25:38.159675    9304 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:25:38.159686    9304 start.go:729] Will try again in 5 seconds ...
	I0717 11:25:43.161781    9304 start.go:360] acquireMachinesLock for custom-flannel-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:25:43.162043    9304 start.go:364] duration metric: took 215.5µs to acquireMachinesLock for "custom-flannel-306000"
	I0717 11:25:43.162092    9304 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:25:43.162178    9304 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:25:43.169231    9304 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:25:43.197712    9304 start.go:159] libmachine.API.Create for "custom-flannel-306000" (driver="qemu2")
	I0717 11:25:43.197750    9304 client.go:168] LocalClient.Create starting
	I0717 11:25:43.197839    9304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:25:43.197889    9304 main.go:141] libmachine: Decoding PEM data...
	I0717 11:25:43.197899    9304 main.go:141] libmachine: Parsing certificate...
	I0717 11:25:43.197942    9304 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:25:43.197973    9304 main.go:141] libmachine: Decoding PEM data...
	I0717 11:25:43.197980    9304 main.go:141] libmachine: Parsing certificate...
	I0717 11:25:43.198406    9304 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:25:43.376468    9304 main.go:141] libmachine: Creating SSH key...
	I0717 11:25:43.448447    9304 main.go:141] libmachine: Creating Disk image...
	I0717 11:25:43.448454    9304 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:25:43.448665    9304 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/disk.qcow2
	I0717 11:25:43.457930    9304 main.go:141] libmachine: STDOUT: 
	I0717 11:25:43.457953    9304 main.go:141] libmachine: STDERR: 
	I0717 11:25:43.458012    9304 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/disk.qcow2 +20000M
	I0717 11:25:43.467059    9304 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:25:43.467091    9304 main.go:141] libmachine: STDERR: 
	I0717 11:25:43.467108    9304 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/disk.qcow2
	I0717 11:25:43.467114    9304 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:25:43.467125    9304 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:25:43.467173    9304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:c2:30:78:55:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/custom-flannel-306000/disk.qcow2
	I0717 11:25:43.469396    9304 main.go:141] libmachine: STDOUT: 
	I0717 11:25:43.469414    9304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:25:43.469426    9304 client.go:171] duration metric: took 271.672167ms to LocalClient.Create
	I0717 11:25:45.471530    9304 start.go:128] duration metric: took 2.309339834s to createHost
	I0717 11:25:45.471573    9304 start.go:83] releasing machines lock for "custom-flannel-306000", held for 2.30952075s
	W0717 11:25:45.471775    9304 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:25:45.481237    9304 out.go:177] 
	W0717 11:25:45.489278    9304 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:25:45.489291    9304 out.go:239] * 
	* 
	W0717 11:25:45.490311    9304 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:25:45.499223    9304 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.95s)

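net_test.go:114 asserts only on the exit status of the start command. A hypothetical standalone reproduction of that check (the binary path and flags are copied verbatim from the failing run above; everything else is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the failing run above.
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "custom-flannel-306000", "--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m",
		"--cni=testdata/kube-flannel.yaml", "--driver=qemu2")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 80 in the run above; the stderr attributes it to GUEST_PROVISION.
		fmt.Println("exit status:", exitErr.ExitCode())
	}
}
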
TestNetworkPlugins/group/false/Start (9.89s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.89362625s)

-- stdout --
	* [false-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-306000" primary control-plane node in "false-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0717 11:26:39.396779    9430 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:26:39.396929    9430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:26:39.396935    9430 out.go:304] Setting ErrFile to fd 2...
	I0717 11:26:39.396938    9430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:26:39.397067    9430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:26:39.398199    9430 out.go:298] Setting JSON to false
	I0717 11:26:39.415120    9430 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6971,"bootTime":1721233828,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:26:39.415208    9430 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:26:39.419201    9430 out.go:177] * [false-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:26:39.427145    9430 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:26:39.427207    9430 notify.go:220] Checking for updates...
	I0717 11:26:39.435073    9430 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:26:39.438128    9430 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:26:39.441010    9430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:26:39.444136    9430 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:26:39.447133    9430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:26:39.450347    9430 config.go:182] Loaded profile config "NoKubernetes-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0717 11:26:39.450409    9430 config.go:182] Loaded profile config "auto-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:26:39.450469    9430 config.go:182] Loaded profile config "calico-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:26:39.450529    9430 config.go:182] Loaded profile config "custom-flannel-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:26:39.450589    9430 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:26:39.450645    9430 config.go:182] Loaded profile config "pause-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:26:39.450706    9430 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:26:39.450758    9430 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:26:39.450802    9430 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:26:39.455062    9430 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:26:39.462041    9430 start.go:297] selected driver: qemu2
	I0717 11:26:39.462049    9430 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:26:39.462056    9430 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:26:39.464309    9430 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:26:39.467082    9430 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:26:39.470178    9430 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:26:39.470211    9430 cni.go:84] Creating CNI manager for "false"
	I0717 11:26:39.470245    9430 start.go:340] cluster config:
	{Name:false-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:26:39.474550    9430 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:26:39.483111    9430 out.go:177] * Starting "false-306000" primary control-plane node in "false-306000" cluster
	I0717 11:26:39.486862    9430 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:26:39.486879    9430 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:26:39.486886    9430 cache.go:56] Caching tarball of preloaded images
	I0717 11:26:39.486940    9430 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:26:39.486945    9430 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:26:39.486993    9430 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/false-306000/config.json ...
	I0717 11:26:39.487005    9430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/false-306000/config.json: {Name:mk302874f518ef136a2eb5376fa5cd23f63c8b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:26:39.487450    9430 start.go:360] acquireMachinesLock for false-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:26:39.487481    9430 start.go:364] duration metric: took 25.666µs to acquireMachinesLock for "false-306000"
	I0717 11:26:39.487492    9430 start.go:93] Provisioning new machine with config: &{Name:false-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:false-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:26:39.487526    9430 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:26:39.495059    9430 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:26:39.510182    9430 start.go:159] libmachine.API.Create for "false-306000" (driver="qemu2")
	I0717 11:26:39.510224    9430 client.go:168] LocalClient.Create starting
	I0717 11:26:39.510292    9430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:26:39.510322    9430 main.go:141] libmachine: Decoding PEM data...
	I0717 11:26:39.510333    9430 main.go:141] libmachine: Parsing certificate...
	I0717 11:26:39.510375    9430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:26:39.510398    9430 main.go:141] libmachine: Decoding PEM data...
	I0717 11:26:39.510407    9430 main.go:141] libmachine: Parsing certificate...
	I0717 11:26:39.510779    9430 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:26:39.754498    9430 main.go:141] libmachine: Creating SSH key...
	I0717 11:26:39.873875    9430 main.go:141] libmachine: Creating Disk image...
	I0717 11:26:39.873885    9430 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:26:39.874095    9430 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/disk.qcow2
	I0717 11:26:39.885718    9430 main.go:141] libmachine: STDOUT: 
	I0717 11:26:39.885746    9430 main.go:141] libmachine: STDERR: 
	I0717 11:26:39.885805    9430 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/disk.qcow2 +20000M
	I0717 11:26:39.893586    9430 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:26:39.893601    9430 main.go:141] libmachine: STDERR: 
	I0717 11:26:39.893621    9430 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/disk.qcow2
	I0717 11:26:39.893626    9430 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:26:39.893639    9430 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:26:39.893666    9430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:80:39:ef:ca:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/disk.qcow2
	I0717 11:26:39.895423    9430 main.go:141] libmachine: STDOUT: 
	I0717 11:26:39.895444    9430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:26:39.895463    9430 client.go:171] duration metric: took 385.234625ms to LocalClient.Create
	I0717 11:26:41.897594    9430 start.go:128] duration metric: took 2.410056416s to createHost
	I0717 11:26:41.897657    9430 start.go:83] releasing machines lock for "false-306000", held for 2.41017125s
	W0717 11:26:41.897724    9430 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:26:41.912936    9430 out.go:177] * Deleting "false-306000" in qemu2 ...
	W0717 11:26:41.931324    9430 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:26:41.931342    9430 start.go:729] Will try again in 5 seconds ...
	I0717 11:26:46.933509    9430 start.go:360] acquireMachinesLock for false-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:26:46.933780    9430 start.go:364] duration metric: took 209.625µs to acquireMachinesLock for "false-306000"
	I0717 11:26:46.933839    9430 start.go:93] Provisioning new machine with config: &{Name:false-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:false-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:26:46.933958    9430 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:26:46.945376    9430 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:26:46.977156    9430 start.go:159] libmachine.API.Create for "false-306000" (driver="qemu2")
	I0717 11:26:46.977204    9430 client.go:168] LocalClient.Create starting
	I0717 11:26:46.977285    9430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:26:46.977332    9430 main.go:141] libmachine: Decoding PEM data...
	I0717 11:26:46.977343    9430 main.go:141] libmachine: Parsing certificate...
	I0717 11:26:46.977393    9430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:26:46.977421    9430 main.go:141] libmachine: Decoding PEM data...
	I0717 11:26:46.977429    9430 main.go:141] libmachine: Parsing certificate...
	I0717 11:26:46.977749    9430 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:26:47.121015    9430 main.go:141] libmachine: Creating SSH key...
	I0717 11:26:47.192193    9430 main.go:141] libmachine: Creating Disk image...
	I0717 11:26:47.192201    9430 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:26:47.192453    9430 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/disk.qcow2
	I0717 11:26:47.201691    9430 main.go:141] libmachine: STDOUT: 
	I0717 11:26:47.201711    9430 main.go:141] libmachine: STDERR: 
	I0717 11:26:47.201754    9430 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/disk.qcow2 +20000M
	I0717 11:26:47.209553    9430 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:26:47.209568    9430 main.go:141] libmachine: STDERR: 
	I0717 11:26:47.209578    9430 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/disk.qcow2
	I0717 11:26:47.209584    9430 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:26:47.209595    9430 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:26:47.209633    9430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:de:3d:62:77:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/false-306000/disk.qcow2
	I0717 11:26:47.211226    9430 main.go:141] libmachine: STDOUT: 
	I0717 11:26:47.211242    9430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:26:47.211255    9430 client.go:171] duration metric: took 234.046791ms to LocalClient.Create
	I0717 11:26:49.213377    9430 start.go:128] duration metric: took 2.279402125s to createHost
	I0717 11:26:49.213408    9430 start.go:83] releasing machines lock for "false-306000", held for 2.27961975s
	W0717 11:26:49.213606    9430 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:26:49.226132    9430 out.go:177] 
	W0717 11:26:49.231167    9430 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:26:49.231178    9430 out.go:239] * 
	* 
	W0717 11:26:49.232223    9430 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:26:49.244027    9430 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.89s)

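The stderr above also makes the driver's retry shape visible: the first createHost attempt fails, the half-created VM is deleted, the driver waits five seconds ("Will try again in 5 seconds ..."), and exactly one retry runs before it exits. An illustrative sketch of that control flow, where createHost is a hypothetical stand-in for minikube's provisioning code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for minikube's provisioning path; in the runs
// above it fails immediately because socket_vmnet is unreachable.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // delay observed in the log
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}
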
TestNetworkPlugins/group/kindnet/Start (9.9s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.902451209s)

-- stdout --
	* [kindnet-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-306000" primary control-plane node in "kindnet-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0717 11:27:47.810611    9549 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:27:47.810995    9549 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:27:47.811000    9549 out.go:304] Setting ErrFile to fd 2...
	I0717 11:27:47.811003    9549 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:27:47.811237    9549 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:27:47.812803    9549 out.go:298] Setting JSON to false
	I0717 11:27:47.830018    9549 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7039,"bootTime":1721233828,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:27:47.830091    9549 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:27:47.834751    9549 out.go:177] * [kindnet-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:27:47.840648    9549 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:27:47.840715    9549 notify.go:220] Checking for updates...
	I0717 11:27:47.847688    9549 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:27:47.850673    9549 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:27:47.853695    9549 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:27:47.856691    9549 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:27:47.858059    9549 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:27:47.860953    9549 config.go:182] Loaded profile config "NoKubernetes-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0717 11:27:47.861019    9549 config.go:182] Loaded profile config "auto-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:27:47.861076    9549 config.go:182] Loaded profile config "calico-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:27:47.861133    9549 config.go:182] Loaded profile config "custom-flannel-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:27:47.861190    9549 config.go:182] Loaded profile config "false-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:27:47.861253    9549 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:27:47.861310    9549 config.go:182] Loaded profile config "pause-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:27:47.861365    9549 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:27:47.861416    9549 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:27:47.861473    9549 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:27:47.865735    9549 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:27:47.866865    9549 start.go:297] selected driver: qemu2
	I0717 11:27:47.866872    9549 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:27:47.866878    9549 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:27:47.869055    9549 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:27:47.872658    9549 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:27:47.875826    9549 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:27:47.875859    9549 cni.go:84] Creating CNI manager for "kindnet"
	I0717 11:27:47.875873    9549 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 11:27:47.875913    9549 start.go:340] cluster config:
	{Name:kindnet-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:27:47.879290    9549 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:27:47.886632    9549 out.go:177] * Starting "kindnet-306000" primary control-plane node in "kindnet-306000" cluster
	I0717 11:27:47.890691    9549 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:27:47.890706    9549 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:27:47.890714    9549 cache.go:56] Caching tarball of preloaded images
	I0717 11:27:47.890779    9549 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:27:47.890784    9549 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:27:47.890831    9549 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/kindnet-306000/config.json ...
	I0717 11:27:47.890842    9549 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/kindnet-306000/config.json: {Name:mk69d64b068af7b504421aaf0ab9ff3e409f2ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:27:47.891348    9549 start.go:360] acquireMachinesLock for kindnet-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:27:47.891373    9549 start.go:364] duration metric: took 20.208µs to acquireMachinesLock for "kindnet-306000"
	I0717 11:27:47.891386    9549 start.go:93] Provisioning new machine with config: &{Name:kindnet-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:kindnet-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:27:47.891426    9549 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:27:47.898710    9549 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:27:47.913545    9549 start.go:159] libmachine.API.Create for "kindnet-306000" (driver="qemu2")
	I0717 11:27:47.913571    9549 client.go:168] LocalClient.Create starting
	I0717 11:27:47.913632    9549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:27:47.913658    9549 main.go:141] libmachine: Decoding PEM data...
	I0717 11:27:47.913667    9549 main.go:141] libmachine: Parsing certificate...
	I0717 11:27:47.913711    9549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:27:47.913726    9549 main.go:141] libmachine: Decoding PEM data...
	I0717 11:27:47.913735    9549 main.go:141] libmachine: Parsing certificate...
	I0717 11:27:47.914100    9549 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:27:48.218540    9549 main.go:141] libmachine: Creating SSH key...
	I0717 11:27:48.302354    9549 main.go:141] libmachine: Creating Disk image...
	I0717 11:27:48.302361    9549 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:27:48.302560    9549 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/disk.qcow2
	I0717 11:27:48.313929    9549 main.go:141] libmachine: STDOUT: 
	I0717 11:27:48.313954    9549 main.go:141] libmachine: STDERR: 
	I0717 11:27:48.314011    9549 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/disk.qcow2 +20000M
	I0717 11:27:48.322083    9549 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:27:48.322098    9549 main.go:141] libmachine: STDERR: 
	I0717 11:27:48.322119    9549 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/disk.qcow2
	I0717 11:27:48.322124    9549 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:27:48.322133    9549 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:27:48.322162    9549 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:6d:12:7c:6b:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/disk.qcow2
	I0717 11:27:48.323822    9549 main.go:141] libmachine: STDOUT: 
	I0717 11:27:48.323837    9549 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:27:48.323860    9549 client.go:171] duration metric: took 410.286209ms to LocalClient.Create
	I0717 11:27:50.325938    9549 start.go:128] duration metric: took 2.434504917s to createHost
	I0717 11:27:50.325960    9549 start.go:83] releasing machines lock for "kindnet-306000", held for 2.434585417s
	W0717 11:27:50.325994    9549 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:27:50.336999    9549 out.go:177] * Deleting "kindnet-306000" in qemu2 ...
	W0717 11:27:50.356908    9549 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:27:50.356917    9549 start.go:729] Will try again in 5 seconds ...
	I0717 11:27:55.359035    9549 start.go:360] acquireMachinesLock for kindnet-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:27:55.359259    9549 start.go:364] duration metric: took 176.125µs to acquireMachinesLock for "kindnet-306000"
	I0717 11:27:55.359320    9549 start.go:93] Provisioning new machine with config: &{Name:kindnet-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:kindnet-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:27:55.359479    9549 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:27:55.366790    9549 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:27:55.388915    9549 start.go:159] libmachine.API.Create for "kindnet-306000" (driver="qemu2")
	I0717 11:27:55.388948    9549 client.go:168] LocalClient.Create starting
	I0717 11:27:55.389024    9549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:27:55.389065    9549 main.go:141] libmachine: Decoding PEM data...
	I0717 11:27:55.389077    9549 main.go:141] libmachine: Parsing certificate...
	I0717 11:27:55.389119    9549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:27:55.389147    9549 main.go:141] libmachine: Decoding PEM data...
	I0717 11:27:55.389156    9549 main.go:141] libmachine: Parsing certificate...
	I0717 11:27:55.389472    9549 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:27:55.569160    9549 main.go:141] libmachine: Creating SSH key...
	I0717 11:27:55.619932    9549 main.go:141] libmachine: Creating Disk image...
	I0717 11:27:55.619952    9549 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:27:55.620182    9549 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/disk.qcow2
	I0717 11:27:55.629478    9549 main.go:141] libmachine: STDOUT: 
	I0717 11:27:55.629496    9549 main.go:141] libmachine: STDERR: 
	I0717 11:27:55.629549    9549 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/disk.qcow2 +20000M
	I0717 11:27:55.637481    9549 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:27:55.637503    9549 main.go:141] libmachine: STDERR: 
	I0717 11:27:55.637519    9549 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/disk.qcow2
	I0717 11:27:55.637524    9549 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:27:55.637530    9549 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:27:55.637564    9549 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:70:16:b4:62:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kindnet-306000/disk.qcow2
	I0717 11:27:55.639269    9549 main.go:141] libmachine: STDOUT: 
	I0717 11:27:55.639285    9549 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:27:55.639297    9549 client.go:171] duration metric: took 250.346167ms to LocalClient.Create
	I0717 11:27:57.641426    9549 start.go:128] duration metric: took 2.281934584s to createHost
	I0717 11:27:57.641469    9549 start.go:83] releasing machines lock for "kindnet-306000", held for 2.282195416s
	W0717 11:27:57.641655    9549 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:27:57.650101    9549 out.go:177] 
	W0717 11:27:57.657091    9549 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:27:57.657104    9549 out.go:239] * 
	* 
	W0717 11:27:57.658035    9549 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:27:57.670069    9549 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.90s)
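Every attempt in this group fails at the same step: the VM is launched through socket_vmnet_client, which must obtain a network file descriptor from a socket_vmnet daemon listening on /var/run/socket_vmnet. "Connection refused" indicates nothing is accepting connections on that socket, so QEMU never starts. A minimal Go sketch of that same reachability check (standard library only; the socket path is taken from the log, the 2-second timeout is an arbitrary choice):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the UNIX socket that socket_vmnet_client hands to QEMU.
	// A refused connection reproduces the failure mode logged above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails the same way, the daemon on the test host is down, which is consistent with every network-plugin start test in this run failing identically.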

TestNetworkPlugins/group/flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.887895s)

-- stdout --
	* [flannel-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-306000" primary control-plane node in "flannel-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:28:51.350783    9673 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:28:51.350913    9673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:28:51.350916    9673 out.go:304] Setting ErrFile to fd 2...
	I0717 11:28:51.350919    9673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:28:51.351062    9673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:28:51.352185    9673 out.go:298] Setting JSON to false
	I0717 11:28:51.369556    9673 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7103,"bootTime":1721233828,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:28:51.369622    9673 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:28:51.374580    9673 out.go:177] * [flannel-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:28:51.381561    9673 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:28:51.381698    9673 notify.go:220] Checking for updates...
	I0717 11:28:51.388460    9673 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:28:51.392514    9673 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:28:51.395521    9673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:28:51.398504    9673 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:28:51.401481    9673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:28:51.404841    9673 config.go:182] Loaded profile config "NoKubernetes-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0717 11:28:51.404907    9673 config.go:182] Loaded profile config "auto-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:28:51.404971    9673 config.go:182] Loaded profile config "calico-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:28:51.405031    9673 config.go:182] Loaded profile config "custom-flannel-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:28:51.405091    9673 config.go:182] Loaded profile config "false-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:28:51.405151    9673 config.go:182] Loaded profile config "kindnet-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:28:51.405207    9673 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:28:51.405260    9673 config.go:182] Loaded profile config "pause-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:28:51.405316    9673 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:28:51.405373    9673 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:28:51.405421    9673 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:28:51.409477    9673 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:28:51.418542    9673 start.go:297] selected driver: qemu2
	I0717 11:28:51.418548    9673 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:28:51.418553    9673 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:28:51.421488    9673 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:28:51.425431    9673 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:28:51.429737    9673 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:28:51.429775    9673 cni.go:84] Creating CNI manager for "flannel"
	I0717 11:28:51.429786    9673 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0717 11:28:51.429833    9673 start.go:340] cluster config:
	{Name:flannel-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:28:51.434031    9673 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:28:51.437493    9673 out.go:177] * Starting "flannel-306000" primary control-plane node in "flannel-306000" cluster
	I0717 11:28:51.444537    9673 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:28:51.444551    9673 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:28:51.444564    9673 cache.go:56] Caching tarball of preloaded images
	I0717 11:28:51.444616    9673 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:28:51.444622    9673 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:28:51.444691    9673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/flannel-306000/config.json ...
	I0717 11:28:51.444703    9673 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/flannel-306000/config.json: {Name:mk4586588b1371438aa2644fecbaa74835b42a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:28:51.444991    9673 start.go:360] acquireMachinesLock for flannel-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:28:51.445022    9673 start.go:364] duration metric: took 25.583µs to acquireMachinesLock for "flannel-306000"
	I0717 11:28:51.445031    9673 start.go:93] Provisioning new machine with config: &{Name:flannel-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:flannel-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:28:51.445062    9673 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:28:51.449507    9673 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:28:51.464534    9673 start.go:159] libmachine.API.Create for "flannel-306000" (driver="qemu2")
	I0717 11:28:51.464566    9673 client.go:168] LocalClient.Create starting
	I0717 11:28:51.464625    9673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:28:51.464661    9673 main.go:141] libmachine: Decoding PEM data...
	I0717 11:28:51.464671    9673 main.go:141] libmachine: Parsing certificate...
	I0717 11:28:51.464712    9673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:28:51.464741    9673 main.go:141] libmachine: Decoding PEM data...
	I0717 11:28:51.464750    9673 main.go:141] libmachine: Parsing certificate...
	I0717 11:28:51.465250    9673 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:28:51.691770    9673 main.go:141] libmachine: Creating SSH key...
	I0717 11:28:51.758599    9673 main.go:141] libmachine: Creating Disk image...
	I0717 11:28:51.758605    9673 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:28:51.758841    9673 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/disk.qcow2
	I0717 11:28:51.768824    9673 main.go:141] libmachine: STDOUT: 
	I0717 11:28:51.768849    9673 main.go:141] libmachine: STDERR: 
	I0717 11:28:51.768897    9673 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/disk.qcow2 +20000M
	I0717 11:28:51.777088    9673 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:28:51.777107    9673 main.go:141] libmachine: STDERR: 
	I0717 11:28:51.777117    9673 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/disk.qcow2
	I0717 11:28:51.777121    9673 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:28:51.777157    9673 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:28:51.777181    9673 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:ec:e9:5d:46:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/disk.qcow2
	I0717 11:28:51.778879    9673 main.go:141] libmachine: STDOUT: 
	I0717 11:28:51.778894    9673 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:28:51.778912    9673 client.go:171] duration metric: took 314.342208ms to LocalClient.Create
	I0717 11:28:53.781030    9673 start.go:128] duration metric: took 2.335954916s to createHost
	I0717 11:28:53.781076    9673 start.go:83] releasing machines lock for "flannel-306000", held for 2.336049292s
	W0717 11:28:53.781142    9673 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:28:53.790701    9673 out.go:177] * Deleting "flannel-306000" in qemu2 ...
	W0717 11:28:53.812697    9673 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:28:53.812714    9673 start.go:729] Will try again in 5 seconds ...
	I0717 11:28:58.814861    9673 start.go:360] acquireMachinesLock for flannel-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:28:58.814989    9673 start.go:364] duration metric: took 102.75µs to acquireMachinesLock for "flannel-306000"
	I0717 11:28:58.815021    9673 start.go:93] Provisioning new machine with config: &{Name:flannel-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:flannel-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:28:58.815072    9673 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:28:58.825571    9673 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:28:58.843304    9673 start.go:159] libmachine.API.Create for "flannel-306000" (driver="qemu2")
	I0717 11:28:58.843341    9673 client.go:168] LocalClient.Create starting
	I0717 11:28:58.843412    9673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:28:58.843455    9673 main.go:141] libmachine: Decoding PEM data...
	I0717 11:28:58.843463    9673 main.go:141] libmachine: Parsing certificate...
	I0717 11:28:58.843500    9673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:28:58.843524    9673 main.go:141] libmachine: Decoding PEM data...
	I0717 11:28:58.843530    9673 main.go:141] libmachine: Parsing certificate...
	I0717 11:28:58.843913    9673 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:28:58.983923    9673 main.go:141] libmachine: Creating SSH key...
	I0717 11:28:59.147971    9673 main.go:141] libmachine: Creating Disk image...
	I0717 11:28:59.147985    9673 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:28:59.148244    9673 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/disk.qcow2
	I0717 11:28:59.158198    9673 main.go:141] libmachine: STDOUT: 
	I0717 11:28:59.158214    9673 main.go:141] libmachine: STDERR: 
	I0717 11:28:59.158269    9673 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/disk.qcow2 +20000M
	I0717 11:28:59.166334    9673 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:28:59.166347    9673 main.go:141] libmachine: STDERR: 
	I0717 11:28:59.166365    9673 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/disk.qcow2
	I0717 11:28:59.166368    9673 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:28:59.166383    9673 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:28:59.166409    9673 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:76:02:bf:37:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/flannel-306000/disk.qcow2
	I0717 11:28:59.168087    9673 main.go:141] libmachine: STDOUT: 
	I0717 11:28:59.168108    9673 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:28:59.168125    9673 client.go:171] duration metric: took 324.779083ms to LocalClient.Create
	I0717 11:29:01.170230    9673 start.go:128] duration metric: took 2.35514675s to createHost
	I0717 11:29:01.170265    9673 start.go:83] releasing machines lock for "flannel-306000", held for 2.35527225s
	W0717 11:29:01.170467    9673 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:29:01.177836    9673 out.go:177] 
	W0717 11:29:01.182833    9673 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:29:01.182853    9673 out.go:239] * 
	* 
	W0717 11:29:01.183672    9673 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:29:01.194832    9673 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.89s)
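The disk-provisioning half of each attempt does succeed: qemu-img convert and qemu-img resize both return with empty STDERR before the socket hand-off fails. A rough Go sketch of those two steps as the log shows them, with placeholder file names standing in for the per-profile paths under .minikube/machines (the raw source image must already exist):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	raw := "disk.qcow2.raw" // placeholder for the raw boot image
	qcow := "disk.qcow2"    // placeholder for the qcow2 output

	steps := [][]string{
		{"convert", "-f", "raw", "-O", "qcow2", raw, qcow}, // raw -> qcow2
		{"resize", qcow, "+20000M"},                        // grow by 20000 MB
	}
	for _, args := range steps {
		out, err := exec.Command("qemu-img", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("qemu-img %v failed: %v\n%s", args, err, out)
			return
		}
		fmt.Printf("qemu-img %v ok\n%s", args, out)
	}
}

This mirrors the pattern in the log: image creation is healthy, and the failure is isolated to connecting QEMU's netdev to socket_vmnet.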

TestNetworkPlugins/group/enable-default-cni/Start (9.93s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.927956583s)

-- stdout --
	* [enable-default-cni-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-306000" primary control-plane node in "enable-default-cni-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:29:59.865540    9796 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:29:59.865690    9796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:29:59.865693    9796 out.go:304] Setting ErrFile to fd 2...
	I0717 11:29:59.865696    9796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:29:59.865855    9796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:29:59.867084    9796 out.go:298] Setting JSON to false
	I0717 11:29:59.885199    9796 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7171,"bootTime":1721233828,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:29:59.885278    9796 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:29:59.889117    9796 out.go:177] * [enable-default-cni-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:29:59.896070    9796 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:29:59.896109    9796 notify.go:220] Checking for updates...
	I0717 11:29:59.903818    9796 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:29:59.907964    9796 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:29:59.910971    9796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:29:59.912434    9796 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:29:59.915985    9796 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:29:59.919267    9796 config.go:182] Loaded profile config "NoKubernetes-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0717 11:29:59.919332    9796 config.go:182] Loaded profile config "auto-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:29:59.919389    9796 config.go:182] Loaded profile config "calico-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:29:59.919452    9796 config.go:182] Loaded profile config "custom-flannel-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:29:59.919513    9796 config.go:182] Loaded profile config "false-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:29:59.919565    9796 config.go:182] Loaded profile config "flannel-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:29:59.919617    9796 config.go:182] Loaded profile config "kindnet-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:29:59.919670    9796 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:29:59.919726    9796 config.go:182] Loaded profile config "pause-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:29:59.919777    9796 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:29:59.919831    9796 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:29:59.919876    9796 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:29:59.923766    9796 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:29:59.931035    9796 start.go:297] selected driver: qemu2
	I0717 11:29:59.931042    9796 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:29:59.931049    9796 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:29:59.933207    9796 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:29:59.936949    9796 out.go:177] * Automatically selected the socket_vmnet network
	E0717 11:29:59.941032    9796 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0717 11:29:59.941042    9796 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:29:59.941058    9796 cni.go:84] Creating CNI manager for "bridge"
	I0717 11:29:59.941062    9796 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:29:59.941092    9796 start.go:340] cluster config:
	{Name:enable-default-cni-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:29:59.944382    9796 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:29:59.948956    9796 out.go:177] * Starting "enable-default-cni-306000" primary control-plane node in "enable-default-cni-306000" cluster
	I0717 11:29:59.956949    9796 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:29:59.956964    9796 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:29:59.956976    9796 cache.go:56] Caching tarball of preloaded images
	I0717 11:29:59.957048    9796 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:29:59.957053    9796 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:29:59.957105    9796 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/enable-default-cni-306000/config.json ...
	I0717 11:29:59.957116    9796 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/enable-default-cni-306000/config.json: {Name:mk7c049c9c40bdef737828a2e53d77f60410a0a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:29:59.957395    9796 start.go:360] acquireMachinesLock for enable-default-cni-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:29:59.957420    9796 start.go:364] duration metric: took 20.083µs to acquireMachinesLock for "enable-default-cni-306000"
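acquireMachinesLock returns in about 20µs here because nothing else holds the lock; the logged spec (Delay:500ms Timeout:13m0s) describes a poll-until-deadline loop for the contended case. A rough sketch of that pattern, using O_EXCL file creation as a stand-in lock (an assumption for illustration; minikube's real lock implementation may differ):

package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// acquire polls for an exclusive lock file every delay until timeout,
// mirroring the {Delay:500ms Timeout:13m0s} spec in the log.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire(filepath.Join(os.TempDir(), "machines.lock"), 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("acquired machines lock") // uncontended, so nearly instant
}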
	I0717 11:29:59.957429    9796 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:29:59.957460    9796 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:29:59.965966    9796 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:29:59.981572    9796 start.go:159] libmachine.API.Create for "enable-default-cni-306000" (driver="qemu2")
	I0717 11:29:59.981597    9796 client.go:168] LocalClient.Create starting
	I0717 11:29:59.981666    9796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:29:59.981693    9796 main.go:141] libmachine: Decoding PEM data...
	I0717 11:29:59.981701    9796 main.go:141] libmachine: Parsing certificate...
	I0717 11:29:59.981735    9796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:29:59.981751    9796 main.go:141] libmachine: Decoding PEM data...
	I0717 11:29:59.981762    9796 main.go:141] libmachine: Parsing certificate...
	I0717 11:29:59.982116    9796 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:30:00.259398    9796 main.go:141] libmachine: Creating SSH key...
	I0717 11:30:00.317371    9796 main.go:141] libmachine: Creating Disk image...
	I0717 11:30:00.317378    9796 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:30:00.317576    9796 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/disk.qcow2
	I0717 11:30:00.329646    9796 main.go:141] libmachine: STDOUT: 
	I0717 11:30:00.329668    9796 main.go:141] libmachine: STDERR: 
	I0717 11:30:00.329727    9796 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/disk.qcow2 +20000M
	I0717 11:30:00.338261    9796 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:30:00.338276    9796 main.go:141] libmachine: STDERR: 
	I0717 11:30:00.338288    9796 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/disk.qcow2
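Both qemu-img invocations above succeed (empty STDERR): the raw image is converted to qcow2, then its virtual size is grown by +20000M. If the disk image were in doubt, qemu-img info would confirm the format and virtual size; a small wrapper, assuming qemu-img is on PATH (diskPath is a placeholder, substitute the machine's disk.qcow2 path from the log):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// diskPath is a placeholder; point it at the machine's disk.qcow2.
	diskPath := "disk.qcow2"
	out, err := exec.Command("qemu-img", "info", diskPath).CombinedOutput()
	if err != nil {
		log.Fatalf("qemu-img info failed: %v\n%s", err, out)
	}
	// Expect "file format: qcow2" and a virtual size reflecting +20000M.
	fmt.Print(string(out))
}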
	I0717 11:30:00.338294    9796 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:30:00.338309    9796 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:30:00.338337    9796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:61:b6:3d:3b:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/disk.qcow2
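Note the network flags in the command above: -netdev socket,id=net0,fd=3 tells qemu to use an already-open socket inherited as descriptor 3, which socket_vmnet_client is expected to obtain from the daemon at /var/run/socket_vmnet before launching qemu. With the daemon down, the client fails before qemu ever starts, which is what the STDERR below shows. The fd-3 handoff itself is ordinary descriptor inheritance; a self-contained Go demonstration, with a pipe standing in for the vmnet socket:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	r, w, err := os.Pipe()
	if err != nil {
		log.Fatal(err)
	}
	// ExtraFiles[0] becomes descriptor 3 in the child, much as
	// socket_vmnet_client is expected to hand qemu its socket as fd 3.
	cmd := exec.Command("/bin/sh", "-c", "echo hello-from-fd-3 >&3")
	cmd.ExtraFiles = []*os.File{w}
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	w.Close() // drop the parent's copy before reading
	buf := make([]byte, 64)
	n, _ := r.Read(buf)
	fmt.Printf("parent read: %s", buf[:n])
}

Running it prints "parent read: hello-from-fd-3", confirming the child saw the inherited descriptor.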
	I0717 11:30:00.340433    9796 main.go:141] libmachine: STDOUT: 
	I0717 11:30:00.340447    9796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:30:00.340471    9796 client.go:171] duration metric: took 358.859458ms to LocalClient.Create
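Every failure in this run bottoms out in the same condition: nothing is listening on /var/run/socket_vmnet. That can be verified without minikube or qemu by dialing the socket directly; a short diagnostic sketch (the path comes from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Path taken from the failing log lines above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With the daemon stopped, this reports "connection refused"
		// (or "no such file or directory"), matching the STDERR above.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}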
	I0717 11:30:02.342613    9796 start.go:128] duration metric: took 2.385139041s to createHost
	I0717 11:30:02.342663    9796 start.go:83] releasing machines lock for "enable-default-cni-306000", held for 2.385237708s
	W0717 11:30:02.342719    9796 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:30:02.351290    9796 out.go:177] * Deleting "enable-default-cni-306000" in qemu2 ...
	W0717 11:30:02.373161    9796 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:30:02.373180    9796 start.go:729] Will try again in 5 seconds ...
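As the following lines show, minikube deletes the half-created machine and retries once after a fixed five-second pause; since the daemon is still unreachable, the second attempt fails identically and the run exits with GUEST_PROVISION. The observable behavior reduces to a single delayed retry, sketched here (an illustration, not minikube's actual start.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the real provisioning step; here it always
// fails the same way the run above does.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err = createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}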
	I0717 11:30:07.375315    9796 start.go:360] acquireMachinesLock for enable-default-cni-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:30:07.375577    9796 start.go:364] duration metric: took 205.792µs to acquireMachinesLock for "enable-default-cni-306000"
	I0717 11:30:07.375627    9796 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:30:07.375723    9796 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:30:07.389099    9796 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:30:07.418355    9796 start.go:159] libmachine.API.Create for "enable-default-cni-306000" (driver="qemu2")
	I0717 11:30:07.418395    9796 client.go:168] LocalClient.Create starting
	I0717 11:30:07.418501    9796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:30:07.418550    9796 main.go:141] libmachine: Decoding PEM data...
	I0717 11:30:07.418560    9796 main.go:141] libmachine: Parsing certificate...
	I0717 11:30:07.418601    9796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:30:07.418627    9796 main.go:141] libmachine: Decoding PEM data...
	I0717 11:30:07.418634    9796 main.go:141] libmachine: Parsing certificate...
	I0717 11:30:07.419090    9796 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:30:07.588292    9796 main.go:141] libmachine: Creating SSH key...
	I0717 11:30:07.694478    9796 main.go:141] libmachine: Creating Disk image...
	I0717 11:30:07.694490    9796 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:30:07.694712    9796 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/disk.qcow2
	I0717 11:30:07.704100    9796 main.go:141] libmachine: STDOUT: 
	I0717 11:30:07.704118    9796 main.go:141] libmachine: STDERR: 
	I0717 11:30:07.704162    9796 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/disk.qcow2 +20000M
	I0717 11:30:07.712023    9796 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:30:07.712036    9796 main.go:141] libmachine: STDERR: 
	I0717 11:30:07.712048    9796 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/disk.qcow2
	I0717 11:30:07.712054    9796 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:30:07.712065    9796 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:30:07.712093    9796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:00:c6:88:b8:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/enable-default-cni-306000/disk.qcow2
	I0717 11:30:07.713750    9796 main.go:141] libmachine: STDOUT: 
	I0717 11:30:07.713770    9796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:30:07.713785    9796 client.go:171] duration metric: took 295.384708ms to LocalClient.Create
	I0717 11:30:09.715938    9796 start.go:128] duration metric: took 2.340204375s to createHost
	I0717 11:30:09.715974    9796 start.go:83] releasing machines lock for "enable-default-cni-306000", held for 2.340382958s
	W0717 11:30:09.716084    9796 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:30:09.726170    9796 out.go:177] 
	W0717 11:30:09.730009    9796 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:30:09.730015    9796 out.go:239] * 
	W0717 11:30:09.730484    9796 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:30:09.738172    9796 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.93s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (10.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.007081708s)

                                                
                                                
-- stdout --
	* [bridge-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-306000" primary control-plane node in "bridge-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 11:31:03.480608    9935 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:31:03.480805    9935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:31:03.480809    9935 out.go:304] Setting ErrFile to fd 2...
	I0717 11:31:03.480813    9935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:31:03.481007    9935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:31:03.482424    9935 out.go:298] Setting JSON to false
	I0717 11:31:03.501341    9935 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7235,"bootTime":1721233828,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:31:03.501406    9935 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:31:03.505971    9935 out.go:177] * [bridge-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:31:03.512010    9935 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:31:03.512082    9935 notify.go:220] Checking for updates...
	I0717 11:31:03.518978    9935 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:31:03.521955    9935 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:31:03.524997    9935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:31:03.527935    9935 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:31:03.530955    9935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:31:03.534329    9935 config.go:182] Loaded profile config "NoKubernetes-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0717 11:31:03.534402    9935 config.go:182] Loaded profile config "auto-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:31:03.534472    9935 config.go:182] Loaded profile config "calico-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:31:03.534530    9935 config.go:182] Loaded profile config "custom-flannel-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:31:03.534594    9935 config.go:182] Loaded profile config "enable-default-cni-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:31:03.534651    9935 config.go:182] Loaded profile config "false-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:31:03.534707    9935 config.go:182] Loaded profile config "flannel-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:31:03.534762    9935 config.go:182] Loaded profile config "kindnet-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:31:03.534819    9935 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:31:03.534872    9935 config.go:182] Loaded profile config "pause-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:31:03.534925    9935 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:31:03.534976    9935 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:31:03.535028    9935 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:31:03.538923    9935 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:31:03.545886    9935 start.go:297] selected driver: qemu2
	I0717 11:31:03.545891    9935 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:31:03.545899    9935 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:31:03.548157    9935 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:31:03.552984    9935 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:31:03.556982    9935 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:31:03.557011    9935 cni.go:84] Creating CNI manager for "bridge"
	I0717 11:31:03.557014    9935 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:31:03.557046    9935 start.go:340] cluster config:
	{Name:bridge-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:31:03.560590    9935 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:31:03.563990    9935 out.go:177] * Starting "bridge-306000" primary control-plane node in "bridge-306000" cluster
	I0717 11:31:03.571901    9935 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:31:03.571914    9935 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:31:03.571923    9935 cache.go:56] Caching tarball of preloaded images
	I0717 11:31:03.571975    9935 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:31:03.571981    9935 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:31:03.572043    9935 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/bridge-306000/config.json ...
	I0717 11:31:03.572054    9935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/bridge-306000/config.json: {Name:mk43f06f306a2835aa0d0f9e534a52409fcf2fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:31:03.572262    9935 start.go:360] acquireMachinesLock for bridge-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:31:03.572291    9935 start.go:364] duration metric: took 24.666µs to acquireMachinesLock for "bridge-306000"
	I0717 11:31:03.572300    9935 start.go:93] Provisioning new machine with config: &{Name:bridge-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:31:03.572324    9935 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:31:03.575953    9935 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:31:03.591429    9935 start.go:159] libmachine.API.Create for "bridge-306000" (driver="qemu2")
	I0717 11:31:03.591461    9935 client.go:168] LocalClient.Create starting
	I0717 11:31:03.591528    9935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:31:03.591558    9935 main.go:141] libmachine: Decoding PEM data...
	I0717 11:31:03.591566    9935 main.go:141] libmachine: Parsing certificate...
	I0717 11:31:03.591605    9935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:31:03.591633    9935 main.go:141] libmachine: Decoding PEM data...
	I0717 11:31:03.591640    9935 main.go:141] libmachine: Parsing certificate...
	I0717 11:31:03.592027    9935 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:31:03.804622    9935 main.go:141] libmachine: Creating SSH key...
	I0717 11:31:04.078612    9935 main.go:141] libmachine: Creating Disk image...
	I0717 11:31:04.078623    9935 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:31:04.078861    9935 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/disk.qcow2
	I0717 11:31:04.090836    9935 main.go:141] libmachine: STDOUT: 
	I0717 11:31:04.090860    9935 main.go:141] libmachine: STDERR: 
	I0717 11:31:04.090927    9935 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/disk.qcow2 +20000M
	I0717 11:31:04.099545    9935 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:31:04.099565    9935 main.go:141] libmachine: STDERR: 
	I0717 11:31:04.099582    9935 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/disk.qcow2
	I0717 11:31:04.099588    9935 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:31:04.099601    9935 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:31:04.099627    9935 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:e2:e2:1e:7e:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/disk.qcow2
	I0717 11:31:04.101407    9935 main.go:141] libmachine: STDOUT: 
	I0717 11:31:04.101421    9935 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:31:04.101438    9935 client.go:171] duration metric: took 509.968833ms to LocalClient.Create
	I0717 11:31:06.103537    9935 start.go:128] duration metric: took 2.531201166s to createHost
	I0717 11:31:06.103571    9935 start.go:83] releasing machines lock for "bridge-306000", held for 2.531276958s
	W0717 11:31:06.103619    9935 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:31:06.116076    9935 out.go:177] * Deleting "bridge-306000" in qemu2 ...
	W0717 11:31:06.134787    9935 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:31:06.134801    9935 start.go:729] Will try again in 5 seconds ...
	I0717 11:31:11.136965    9935 start.go:360] acquireMachinesLock for bridge-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:31:11.137261    9935 start.go:364] duration metric: took 224.791µs to acquireMachinesLock for "bridge-306000"
	I0717 11:31:11.137366    9935 start.go:93] Provisioning new machine with config: &{Name:bridge-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:31:11.137514    9935 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:31:11.151930    9935 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:31:11.182194    9935 start.go:159] libmachine.API.Create for "bridge-306000" (driver="qemu2")
	I0717 11:31:11.182242    9935 client.go:168] LocalClient.Create starting
	I0717 11:31:11.182341    9935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:31:11.182391    9935 main.go:141] libmachine: Decoding PEM data...
	I0717 11:31:11.182406    9935 main.go:141] libmachine: Parsing certificate...
	I0717 11:31:11.182462    9935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:31:11.182498    9935 main.go:141] libmachine: Decoding PEM data...
	I0717 11:31:11.182509    9935 main.go:141] libmachine: Parsing certificate...
	I0717 11:31:11.183037    9935 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:31:11.327704    9935 main.go:141] libmachine: Creating SSH key...
	I0717 11:31:11.362115    9935 main.go:141] libmachine: Creating Disk image...
	I0717 11:31:11.362125    9935 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:31:11.362314    9935 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/disk.qcow2
	I0717 11:31:11.371515    9935 main.go:141] libmachine: STDOUT: 
	I0717 11:31:11.371532    9935 main.go:141] libmachine: STDERR: 
	I0717 11:31:11.371594    9935 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/disk.qcow2 +20000M
	I0717 11:31:11.379721    9935 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:31:11.379736    9935 main.go:141] libmachine: STDERR: 
	I0717 11:31:11.379756    9935 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/disk.qcow2
	I0717 11:31:11.379760    9935 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:31:11.379770    9935 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:31:11.379805    9935 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:58:4b:65:b0:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/bridge-306000/disk.qcow2
	I0717 11:31:11.381653    9935 main.go:141] libmachine: STDOUT: 
	I0717 11:31:11.381674    9935 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:31:11.381687    9935 client.go:171] duration metric: took 199.437083ms to LocalClient.Create
	I0717 11:31:13.384023    9935 start.go:128] duration metric: took 2.246432875s to createHost
	I0717 11:31:13.384147    9935 start.go:83] releasing machines lock for "bridge-306000", held for 2.246869208s
	W0717 11:31:13.384492    9935 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:31:13.396996    9935 out.go:177] 
	W0717 11:31:13.401130    9935 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:31:13.401160    9935 out.go:239] * 
	W0717 11:31:13.403757    9935 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:31:13.414093    9935 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.01s)
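Each Start subtest in this group spends roughly ten seconds rediscovering the same root cause: the socket_vmnet daemon is down. A pre-flight guard could turn these into skips instead of exit-status-80 failures. A hypothetical helper, not present in net_test.go, sketched here:

package sanity

import (
	"net"
	"testing"
	"time"
)

// requireSocketVMnet is a hypothetical helper: it skips the calling test
// when the socket_vmnet daemon is unreachable, rather than letting every
// Start subtest fail with exit status 80.
func requireSocketVMnet(t *testing.T) {
	t.Helper()
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
	if err != nil {
		t.Skipf("socket_vmnet unavailable: %v", err)
	}
	conn.Close()
}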

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (7201.093s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-306000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.885361209s)

                                                
                                                
-- stdout --
	* [kubenet-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-306000" primary control-plane node in "kubenet-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 11:32:11.815876   10057 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:32:11.816043   10057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:32:11.816048   10057 out.go:304] Setting ErrFile to fd 2...
	I0717 11:32:11.816051   10057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:32:11.816224   10057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 11:32:11.817848   10057 out.go:298] Setting JSON to false
	I0717 11:32:11.837926   10057 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7303,"bootTime":1721233828,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 11:32:11.838020   10057 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:32:11.842782   10057 out.go:177] * [kubenet-306000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:32:11.845750   10057 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 11:32:11.845782   10057 notify.go:220] Checking for updates...
	I0717 11:32:11.852709   10057 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 11:32:11.855732   10057 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:32:11.858645   10057 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:32:11.861698   10057 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 11:32:11.864698   10057 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:32:11.867891   10057 config.go:182] Loaded profile config "NoKubernetes-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0717 11:32:11.867959   10057 config.go:182] Loaded profile config "auto-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:32:11.868022   10057 config.go:182] Loaded profile config "bridge-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:32:11.868082   10057 config.go:182] Loaded profile config "calico-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:32:11.868137   10057 config.go:182] Loaded profile config "custom-flannel-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:32:11.868190   10057 config.go:182] Loaded profile config "enable-default-cni-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:32:11.868245   10057 config.go:182] Loaded profile config "false-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:32:11.868305   10057 config.go:182] Loaded profile config "flannel-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:32:11.868357   10057 config.go:182] Loaded profile config "kindnet-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:32:11.868417   10057 config.go:182] Loaded profile config "multinode-931000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:32:11.868472   10057 config.go:182] Loaded profile config "pause-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:32:11.868524   10057 config.go:182] Loaded profile config "running-upgrade-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:32:11.868576   10057 config.go:182] Loaded profile config "stopped-upgrade-058000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:32:11.868628   10057 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:32:11.872659   10057 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:32:11.878687   10057 start.go:297] selected driver: qemu2
	I0717 11:32:11.878695   10057 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:32:11.878702   10057 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:32:11.880964   10057 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:32:11.884646   10057 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:32:11.888781   10057 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:32:11.888810   10057 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0717 11:32:11.888840   10057 start.go:340] cluster config:
	{Name:kubenet-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubenet-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:32:11.892284   10057 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:32:11.895766   10057 out.go:177] * Starting "kubenet-306000" primary control-plane node in "kubenet-306000" cluster
	I0717 11:32:11.903495   10057 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:32:11.903509   10057 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:32:11.903517   10057 cache.go:56] Caching tarball of preloaded images
	I0717 11:32:11.903569   10057 preload.go:172] Found /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:32:11.903574   10057 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:32:11.903625   10057 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/kubenet-306000/config.json ...
	I0717 11:32:11.903641   10057 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/kubenet-306000/config.json: {Name:mk6629bb6ec4fa24cc716d9a5b6aee4b6140070f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:32:11.904084   10057 start.go:360] acquireMachinesLock for kubenet-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:32:11.904120   10057 start.go:364] duration metric: took 30.667µs to acquireMachinesLock for "kubenet-306000"
	I0717 11:32:11.904129   10057 start.go:93] Provisioning new machine with config: &{Name:kubenet-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubenet-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:32:11.904161   10057 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:32:11.912552   10057 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:32:11.928241   10057 start.go:159] libmachine.API.Create for "kubenet-306000" (driver="qemu2")
	I0717 11:32:11.928272   10057 client.go:168] LocalClient.Create starting
	I0717 11:32:11.928340   10057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:32:11.928374   10057 main.go:141] libmachine: Decoding PEM data...
	I0717 11:32:11.928385   10057 main.go:141] libmachine: Parsing certificate...
	I0717 11:32:11.928428   10057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:32:11.928450   10057 main.go:141] libmachine: Decoding PEM data...
	I0717 11:32:11.928462   10057 main.go:141] libmachine: Parsing certificate...
	I0717 11:32:11.928886   10057 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:32:12.202185   10057 main.go:141] libmachine: Creating SSH key...
	I0717 11:32:12.233356   10057 main.go:141] libmachine: Creating Disk image...
	I0717 11:32:12.233365   10057 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:32:12.233560   10057 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/disk.qcow2
	I0717 11:32:12.243004   10057 main.go:141] libmachine: STDOUT: 
	I0717 11:32:12.243041   10057 main.go:141] libmachine: STDERR: 
	I0717 11:32:12.243091   10057 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/disk.qcow2 +20000M
	I0717 11:32:12.252528   10057 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:32:12.252551   10057 main.go:141] libmachine: STDERR: 
	I0717 11:32:12.252567   10057 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/disk.qcow2
	I0717 11:32:12.252573   10057 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:32:12.252583   10057 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:32:12.252610   10057 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:8c:4c:98:25:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/disk.qcow2
	I0717 11:32:12.254677   10057 main.go:141] libmachine: STDOUT: 
	I0717 11:32:12.254694   10057 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:32:12.254713   10057 client.go:171] duration metric: took 326.438292ms to LocalClient.Create
	I0717 11:32:14.257084   10057 start.go:128] duration metric: took 2.352901166s to createHost
	I0717 11:32:14.257171   10057 start.go:83] releasing machines lock for "kubenet-306000", held for 2.3530455s
	W0717 11:32:14.257242   10057 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:32:14.266296   10057 out.go:177] * Deleting "kubenet-306000" in qemu2 ...
	W0717 11:32:14.280020   10057 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:32:14.280032   10057 start.go:729] Will try again in 5 seconds ...
	I0717 11:32:19.282201   10057 start.go:360] acquireMachinesLock for kubenet-306000: {Name:mkd0c86c02f3ae7d09dee22405c716e34532ac42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:32:19.282448   10057 start.go:364] duration metric: took 176.167µs to acquireMachinesLock for "kubenet-306000"
	I0717 11:32:19.282516   10057 start.go:93] Provisioning new machine with config: &{Name:kubenet-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubenet-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:32:19.282600   10057 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:32:19.292792   10057 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:32:19.321300   10057 start.go:159] libmachine.API.Create for "kubenet-306000" (driver="qemu2")
	I0717 11:32:19.321339   10057 client.go:168] LocalClient.Create starting
	I0717 11:32:19.321436   10057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/ca.pem
	I0717 11:32:19.321491   10057 main.go:141] libmachine: Decoding PEM data...
	I0717 11:32:19.321503   10057 main.go:141] libmachine: Parsing certificate...
	I0717 11:32:19.321556   10057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19282-6331/.minikube/certs/cert.pem
	I0717 11:32:19.321590   10057 main.go:141] libmachine: Decoding PEM data...
	I0717 11:32:19.321598   10057 main.go:141] libmachine: Parsing certificate...
	I0717 11:32:19.322149   10057 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:32:19.493258   10057 main.go:141] libmachine: Creating SSH key...
	I0717 11:32:19.575909   10057 main.go:141] libmachine: Creating Disk image...
	I0717 11:32:19.575916   10057 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:32:19.576133   10057 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/disk.qcow2
	I0717 11:32:19.585694   10057 main.go:141] libmachine: STDOUT: 
	I0717 11:32:19.585708   10057 main.go:141] libmachine: STDERR: 
	I0717 11:32:19.585774   10057 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/disk.qcow2 +20000M
	I0717 11:32:19.594015   10057 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:32:19.594035   10057 main.go:141] libmachine: STDERR: 
	I0717 11:32:19.594057   10057 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/disk.qcow2
	I0717 11:32:19.594062   10057 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:32:19.594069   10057 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:32:19.594119   10057 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:d6:e5:b3:7e:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19282-6331/.minikube/machines/kubenet-306000/disk.qcow2
	I0717 11:32:19.595880   10057 main.go:141] libmachine: STDOUT: 
	I0717 11:32:19.595895   10057 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:32:19.595908   10057 client.go:171] duration metric: took 274.565167ms to LocalClient.Create
	I0717 11:32:21.598011   10057 start.go:128] duration metric: took 2.315385375s to createHost
	I0717 11:32:21.598062   10057 start.go:83] releasing machines lock for "kubenet-306000", held for 2.315606541s
	W0717 11:32:21.598210   10057 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:32:21.606591   10057 out.go:177] 
	W0717 11:32:21.614582   10057 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:32:21.614590   10057 out.go:239] * 
	* 
	W0717 11:32:21.615325   10057 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:32:21.626590   10057 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.89s)
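Every qemu2 VM creation in this report fails the same way: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never handed a network file descriptor, the retry five seconds later fails identically, and the start exits with status 80 (GUEST_PROVISION). Before rerunning the suite it is worth probing the socket directly. The Go sketch below is a hypothetical diagnostic for this host, not part of the test suite; the socket path is taken from the failure log.

// socketcheck.go - hypothetical helper: confirm the socket_vmnet daemon is
// accepting connections before launching qemu2-driver tests.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failure log
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// The same "Connection refused" seen above means the daemon is
		// down or the socket file is stale.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is up at", sock)
}

If the probe fails, restarting the daemon (via its launchd service, or e.g. `sudo brew services start socket_vmnet` on a Homebrew install) is the usual fix; how the daemon was installed on this agent is an assumption the log does not confirm.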
panic: test timed out after 2h0m0s
running tests:
	TestStartStop (1h33m58s)
	TestStartStop/group/newest-cni (16m22s)
	TestStartStop/group/newest-cni/serial (16m22s)

                                                
                                                
goroutine 2840 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x30c
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x38

                                                
                                                
goroutine 1 [chan receive, 13 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x43c
testing.tRunner(0x140005d9520, 0x14000adfb98)
	/usr/local/go/src/testing/testing.go:1695 +0x128
testing.runTests(0x1400012c480, {0x106612c00, 0x2a, 0x2a}, {0x140000d00c0?, 0x14000b56d80?, 0x106635b60?})
	/usr/local/go/src/testing/testing.go:2159 +0x3b0
testing.(*M).Run(0x14000660c80)
	/usr/local/go/src/testing/testing.go:2027 +0x5a4
k8s.io/minikube/test/integration.TestMain(0x14000660c80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x84
main.main()
	_testmain.go:131 +0x170

                                                
                                                
goroutine 22 [select]:
go.opencensus.io/stats/view.(*worker).start(0x1400059cd80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x88
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x98

                                                
                                                
goroutine 25 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0xec
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 24
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x19c

                                                
                                                
goroutine 224 [IO wait, 119 minutes]:
internal/poll.runtime_pollWait(0x12df4f910, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0xa0
internal/poll.(*pollDesc).wait(0x1400011a380?, 0x10220859c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x1400011a380)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x250
net.(*netFD).accept(0x1400011a380)
	/usr/local/go/src/net/fd_unix.go:172 +0x28
net.(*TCPListener).accept(0x140007236c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x28
net.(*TCPListener).Accept(0x140007236c0)
	/usr/local/go/src/net/tcpsock.go:327 +0x2c
net/http.(*Server).Serve(0x140002a42d0, {0x1052c8990, 0x140007236c0})
	/usr/local/go/src/net/http/server.go:3260 +0x2a8
net/http.(*Server).ListenAndServe(0x140002a42d0)
	/usr/local/go/src/net/http/server.go:3189 +0x84
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x140001efc80?, 0x140005d9a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x20
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 221
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x11c

                                                
                                                
goroutine 2781 [syscall, 18 minutes]:
syscall.syscall6(0x4?, 0x1400124da30?, 0x1000110?, 0x12df390f8?, 0x90?, 0x106fe45b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x68
syscall.wait4(0x1400124d9f8?, 0x1022b4778?, 0x90?, 0x1052120c0?)
	/usr/local/go/src/syscall/zsyscall_darwin_arm64.go:44 +0x4c
syscall.Wait4(0x14000826ec0?, 0x1400124da34, 0x1400053b680?, 0x10232dbec?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x28
os.(*Process).wait(0x14005290e40)
	/usr/local/go/src/os/exec_unix.go:43 +0x80
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0x140001ef680)
	/usr/local/go/src/os/exec/exec.go:901 +0x38
os/exec.(*Cmd).Run(0x140001ef680)
	/usr/local/go/src/os/exec/exec.go:608 +0x38
k8s.io/minikube/test/integration.Run(0x1400094a680, 0x140001ef680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x184
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x1400094a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:162 +0x3f8
testing.tRunner(0x1400094a680, 0x140007a4400)
	/usr/local/go/src/testing/testing.go:1689 +0xec
created by testing.(*T).Run in goroutine 1845
	/usr/local/go/src/testing/testing.go:1742 +0x318

                                                
                                                
goroutine 2832 [IO wait, 18 minutes]:
internal/poll.runtime_pollWait(0x12df4f340, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0xa0
internal/poll.(*pollDesc).wait(0x14001a5b080?, 0x14000449800?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x14001a5b080, {0x14000449800, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x200
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x14000126458, {0x14000449800?, 0x14000090d18?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x70
bytes.(*Buffer).ReadFrom(0x14005252b10, {0x1052b07b8, 0x1400475e288})
	/usr/local/go/src/bytes/buffer.go:211 +0x90
io.copyBuffer({0x1052b08f8, 0x14005252b10}, {0x1052b07b8, 0x1400475e288}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x14000090f18?, {0x1052b08f8, 0x14005252b10})
	/usr/local/go/src/os/file.go:269 +0x5c
os.(*File).WriteTo(0x105ee8c30?, {0x1052b08f8?, 0x14005252b10?})
	/usr/local/go/src/os/file.go:247 +0x60
io.copyBuffer({0x1052b08f8, 0x14005252b10}, {0x1052b0878, 0x14000126458}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x44
os/exec.(*Cmd).Start.func2(0x140016411e0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x34
created by os/exec.(*Cmd).Start in goroutine 2781
	/usr/local/go/src/os/exec/exec.go:727 +0x7e0

                                                
                                                
goroutine 649 [chan send, 119 minutes]:
os/exec.(*Cmd).watchCtx(0x14005138f00, 0x14005178300)
	/usr/local/go/src/os/exec/exec.go:793 +0x2d4
created by os/exec.(*Cmd).Start in goroutine 648
	/usr/local/go/src/os/exec/exec.go:754 +0x7ac

                                                
                                                
goroutine 669 [chan send, 119 minutes]:
os/exec.(*Cmd).watchCtx(0x1400523c180, 0x140050c5c80)
	/usr/local/go/src/os/exec/exec.go:793 +0x2d4
created by os/exec.(*Cmd).Start in goroutine 668
	/usr/local/go/src/os/exec/exec.go:754 +0x7ac

                                                
                                                
goroutine 701 [chan send, 119 minutes]:
os/exec.(*Cmd).watchCtx(0x1400523cc00, 0x14005256360)
	/usr/local/go/src/os/exec/exec.go:793 +0x2d4
created by os/exec.(*Cmd).Start in goroutine 698
	/usr/local/go/src/os/exec/exec.go:754 +0x7ac

                                                
                                                
goroutine 1401 [chan receive, 94 minutes]:
testing.(*T).Run(0x14001696340, {0x103bf11c6?, 0x14000091758?}, 0x1052a5d00)
	/usr/local/go/src/testing/testing.go:1750 +0x32c
k8s.io/minikube/test/integration.TestStartStop(0x14001696340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x14001696340, 0x1052a5ba0)
	/usr/local/go/src/testing/testing.go:1689 +0xec
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x318

                                                
                                                
goroutine 2849 [IO wait, 18 minutes]:
internal/poll.runtime_pollWait(0x12df4f438, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0xa0
internal/poll.(*pollDesc).wait(0x14001a5b140?, 0x14000449a00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x14001a5b140, {0x14000449a00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x200
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x14000126470, {0x14000449a00?, 0x14000091d18?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x70
bytes.(*Buffer).ReadFrom(0x14005252b40, {0x1052b07b8, 0x1400475e290})
	/usr/local/go/src/bytes/buffer.go:211 +0x90
io.copyBuffer({0x1052b08f8, 0x14005252b40}, {0x1052b07b8, 0x1400475e290}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x14000091f18?, {0x1052b08f8, 0x14005252b40})
	/usr/local/go/src/os/file.go:269 +0x5c
os.(*File).WriteTo(0x14000091fa8?, {0x1052b08f8?, 0x14005252b40?})
	/usr/local/go/src/os/file.go:247 +0x60
io.copyBuffer({0x1052b08f8, 0x14005252b40}, {0x1052b0878, 0x14000126470}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x44
os/exec.(*Cmd).Start.func2(0x140001ef200?)
	/usr/local/go/src/os/exec/exec.go:728 +0x34
created by os/exec.(*Cmd).Start in goroutine 2781
	/usr/local/go/src/os/exec/exec.go:727 +0x7e0

                                                
                                                
goroutine 742 [chan send, 119 minutes]:
os/exec.(*Cmd).watchCtx(0x1400523dc80, 0x14005256b40)
	/usr/local/go/src/os/exec/exec.go:793 +0x2d4
created by os/exec.(*Cmd).Start in goroutine 369
	/usr/local/go/src/os/exec/exec.go:754 +0x7ac

                                                
                                                
goroutine 1843 [chan receive, 48 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x43c
testing.tRunner(0x14001400000, 0x1052a5d00)
	/usr/local/go/src/testing/testing.go:1695 +0x128
created by testing.(*T).Run in goroutine 1401
	/usr/local/go/src/testing/testing.go:1742 +0x318

                                                
                                                
goroutine 699 [chan send, 119 minutes]:
os/exec.(*Cmd).watchCtx(0x1400523c900, 0x140052562a0)
	/usr/local/go/src/os/exec/exec.go:793 +0x2d4
created by os/exec.(*Cmd).Start in goroutine 698
	/usr/local/go/src/os/exec/exec.go:754 +0x7ac

                                                
                                                
goroutine 533 [chan send, 117 minutes]:
os/exec.(*Cmd).watchCtx(0x14004f23b00, 0x140050c4240)
	/usr/local/go/src/os/exec/exec.go:793 +0x2d4
created by os/exec.(*Cmd).Start in goroutine 532
	/usr/local/go/src/os/exec/exec.go:754 +0x7ac

                                                
                                                
goroutine 2850 [select, 18 minutes]:
os/exec.(*Cmd).watchCtx(0x140001ef680, 0x14005178d80)
	/usr/local/go/src/os/exec/exec.go:768 +0x7c
created by os/exec.(*Cmd).Start in goroutine 2781
	/usr/local/go/src/os/exec/exec.go:754 +0x7ac

                                                
                                                
goroutine 700 [chan send, 119 minutes]:
os/exec.(*Cmd).watchCtx(0x1400523ca80, 0x14005256300)
	/usr/local/go/src/os/exec/exec.go:793 +0x2d4
created by os/exec.(*Cmd).Start in goroutine 698
	/usr/local/go/src/os/exec/exec.go:754 +0x7ac

                                                
                                                
goroutine 1845 [chan receive, 18 minutes]:
testing.(*T).Run(0x140014004e0, {0x103bf282a?, 0x0?}, 0x140007a4400)
	/usr/local/go/src/testing/testing.go:1750 +0x32c
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x140014004e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0x854
testing.tRunner(0x140014004e0, 0x14000794200)
	/usr/local/go/src/testing/testing.go:1689 +0xec
created by testing.(*T).Run in goroutine 1843
	/usr/local/go/src/testing/testing.go:1742 +0x318
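The dump above accounts for the 2h0m0s panic: goroutine 2840 is the suite's alarm timer (testing.(*M).startAlarm), while goroutine 2781 has sat in syscall.wait4 under os/exec.(*Cmd).Wait for 18 minutes, waiting on a minikube command launched from helpers_test.go:103 for TestStartStop/group/newest-cni/serial; several older watchCtx goroutines have been parked for ~119 minutes behind it. Below is a minimal sketch, not the suite's actual helper, of bounding such a child process with exec.CommandContext so a wedged command is killed instead of pinning the run until the global alarm fires.

// runbounded.go - sketch: give a spawned minikube command its own deadline;
// on expiry the child is killed and the failure surfaces promptly.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()

	// Hypothetical invocation; goroutine 2781 above was blocked in
	// Cmd.Wait on a similar command.
	cmd := exec.CommandContext(ctx, "out/minikube-darwin-arm64", "start", "-p", "newest-cni-000")
	out, err := cmd.CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		fmt.Println("command killed after deadline")
	}
	fmt.Printf("err=%v, %d bytes of output\n", err, len(out))
}

With only the global -timeout in play, one hung subprocess can consume the entire two-hour budget; a per-command deadline confines the damage to the offending test.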

                                                
                                    

Test pass (69/212)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.2/json-events 6.25
13 TestDownloadOnly/v1.30.2/preload-exists 0
16 TestDownloadOnly/v1.30.2/kubectl 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.08
18 TestDownloadOnly/v1.30.2/DeleteAll 0.11
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 6.24
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.28
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.75
48 TestErrorSpam/start 0.38
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 10.57
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.71
64 TestFunctional/serial/CacheCmd/cache/add_local 1.02
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.22
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.1
102 TestFunctional/parallel/License 0.19
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 1.82
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
135 TestFunctional/parallel/ProfileCmd/profile_list 0.08
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.53
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
249 TestStoppedBinaryUpgrade/Setup 0.94
253 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-716000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-716000: exit status 85 (91.2815ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |          |
	|         | -p download-only-716000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:53:37
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:53:37.448205    6822 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:53:37.448367    6822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:37.448370    6822 out.go:304] Setting ErrFile to fd 2...
	I0717 10:53:37.448372    6822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:37.448532    6822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	W0717 10:53:37.448658    6822 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19282-6331/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19282-6331/.minikube/config/config.json: no such file or directory
	I0717 10:53:37.449953    6822 out.go:298] Setting JSON to true
	I0717 10:53:37.465908    6822 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4989,"bootTime":1721233828,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 10:53:37.465981    6822 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:53:37.470876    6822 out.go:97] [download-only-716000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:53:37.470987    6822 notify.go:220] Checking for updates...
	W0717 10:53:37.471041    6822 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 10:53:37.473884    6822 out.go:169] MINIKUBE_LOCATION=19282
	I0717 10:53:37.482860    6822 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 10:53:37.490821    6822 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:53:37.493893    6822 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:53:37.496843    6822 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	W0717 10:53:37.502852    6822 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 10:53:37.503043    6822 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:53:37.504518    6822 out.go:97] Using the qemu2 driver based on user configuration
	I0717 10:53:37.504537    6822 start.go:297] selected driver: qemu2
	I0717 10:53:37.504552    6822 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:53:37.504637    6822 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:53:37.507820    6822 out.go:169] Automatically selected the socket_vmnet network
	I0717 10:53:37.513023    6822 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0717 10:53:37.513163    6822 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 10:53:37.513200    6822 cni.go:84] Creating CNI manager for ""
	I0717 10:53:37.513219    6822 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0717 10:53:37.513282    6822 start.go:340] cluster config:
	{Name:download-only-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:53:37.516891    6822 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:53:37.520829    6822 out.go:97] Downloading VM boot image ...
	I0717 10:53:37.520845    6822 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso
	I0717 10:53:41.707750    6822 out.go:97] Starting "download-only-716000" primary control-plane node in "download-only-716000" cluster
	I0717 10:53:41.707788    6822 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:53:41.763035    6822 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0717 10:53:41.763057    6822 cache.go:56] Caching tarball of preloaded images
	I0717 10:53:41.763204    6822 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:53:41.768310    6822 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 10:53:41.768317    6822 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:53:41.850504    6822 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0717 10:53:47.071999    6822 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:53:47.072149    6822 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:53:47.768666    6822 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0717 10:53:47.768854    6822 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/download-only-716000/config.json ...
	I0717 10:53:47.768886    6822 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/download-only-716000/config.json: {Name:mkcd9c2c4d5071025b18638894cd4ee6de6c5251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:53:47.769146    6822 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:53:47.769333    6822 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0717 10:53:48.112704    6822 out.go:169] 
	W0717 10:53:48.116758    6822 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19282-6331/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108b71a60 0x108b71a60 0x108b71a60 0x108b71a60 0x108b71a60 0x108b71a60 0x108b71a60] Decompressors:map[bz2:0x1400098f4d0 gz:0x1400098f4d8 tar:0x1400098f470 tar.bz2:0x1400098f480 tar.gz:0x1400098f490 tar.xz:0x1400098f4a0 tar.zst:0x1400098f4c0 tbz2:0x1400098f480 tgz:0x1400098f490 txz:0x1400098f4a0 tzst:0x1400098f4c0 xz:0x1400098f4e0 zip:0x1400098f4f0 zst:0x1400098f4e8] Getters:map[file:0x140016e6630 http:0x140000b4d70 https:0x140000b4dc0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0717 10:53:48.116786    6822 out_reason.go:110] 
	W0717 10:53:48.123657    6822 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:53:48.127671    6822 out.go:169] 
	
	
	* The control-plane node download-only-716000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-716000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
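The non-zero exit is expected for a download-only profile, and the captured Last Start shows the one real failure inside it: kubectl could not be cached because fetching the checksum for https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl returned 404 (upstream appears not to publish darwin/arm64 kubectl binaries as far back as v1.20.0). The getter struct in the warning matches hashicorp/go-getter's client, whose `?checksum=file:<url>.sha256` query means "fetch that file first, then verify the artifact against it". Here is a self-contained sketch of the same fetch-then-verify flow; it is illustrative only, not minikube's download.go.

// fetchsha256.go - sketch of the "checksum=file:<url>.sha256" scheme from
// the log: fetch the checksum file, then the artifact, then compare.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetchVerified(url string) ([]byte, error) {
	sumResp, err := http.Get(url + ".sha256") // the request that 404s in the report
	if err != nil {
		return nil, err
	}
	defer sumResp.Body.Close()
	if sumResp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("bad response code: %d", sumResp.StatusCode)
	}
	sumText, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return nil, err
	}
	fields := strings.Fields(string(sumText))
	if len(fields) == 0 {
		return nil, fmt.Errorf("empty checksum file")
	}

	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	if sum := sha256.Sum256(body); hex.EncodeToString(sum[:]) != fields[0] {
		return nil, fmt.Errorf("invalid checksum")
	}
	return body, nil
}

func main() {
	// URL copied from the failing log line; expect the same 404.
	_, err := fetchVerified("https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl")
	fmt.Println("err:", err)
}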

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-716000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.30.2/json-events (6.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-580000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-580000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=qemu2 : (6.252722333s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (6.25s)

                                                
                                    
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
--- PASS: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-580000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-580000: exit status 85 (78.8095ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
	|         | -p download-only-716000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
	| delete  | -p download-only-716000        | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
	| start   | -o=json --download-only        | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
	|         | -p download-only-580000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:53:48
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:53:48.535053    6850 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:53:48.535189    6850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:48.535193    6850 out.go:304] Setting ErrFile to fd 2...
	I0717 10:53:48.535195    6850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:48.535340    6850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:53:48.536355    6850 out.go:298] Setting JSON to true
	I0717 10:53:48.552367    6850 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5000,"bootTime":1721233828,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 10:53:48.552436    6850 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:53:48.555764    6850 out.go:97] [download-only-580000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:53:48.555878    6850 notify.go:220] Checking for updates...
	I0717 10:53:48.559682    6850 out.go:169] MINIKUBE_LOCATION=19282
	I0717 10:53:48.562715    6850 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 10:53:48.566536    6850 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:53:48.569695    6850 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:53:48.572691    6850 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	W0717 10:53:48.578713    6850 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 10:53:48.578855    6850 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:53:48.581650    6850 out.go:97] Using the qemu2 driver based on user configuration
	I0717 10:53:48.581658    6850 start.go:297] selected driver: qemu2
	I0717 10:53:48.581661    6850 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:53:48.581704    6850 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:53:48.584711    6850 out.go:169] Automatically selected the socket_vmnet network
	I0717 10:53:48.589936    6850 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0717 10:53:48.590024    6850 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 10:53:48.590053    6850 cni.go:84] Creating CNI manager for ""
	I0717 10:53:48.590060    6850 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 10:53:48.590067    6850 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 10:53:48.590102    6850 start.go:340] cluster config:
	{Name:download-only-580000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:53:48.593757    6850 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:53:48.596655    6850 out.go:97] Starting "download-only-580000" primary control-plane node in "download-only-580000" cluster
	I0717 10:53:48.596664    6850 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:53:48.669431    6850 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:53:48.669453    6850 cache.go:56] Caching tarball of preloaded images
	I0717 10:53:48.669660    6850 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:53:48.674817    6850 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0717 10:53:48.674825    6850 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:53:48.751018    6850 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4?checksum=md5:3bd37d965c85173ac77cdcc664938efd -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:53:53.035404    6850 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:53:53.035563    6850 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-580000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-580000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.08s)
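Read against the kubenet log at the top of this report, this section shows the other branch of the preload logic: here preload.go finds no local tarball, resolves the remote URL (an md5 appended as a ?checksum= query), downloads it, and saves the verified checksum; the later kubenet start finds the cached tarball and logs "skipping download". A minimal sketch of that cache-or-download decision follows, with the path layout taken from the log and the helper itself hypothetical.

// preloadcache.go - sketch: check the local preload cache before fetching,
// mirroring the "Found local preload ... skipping download" lines above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// tarballPath mirrors the cache layout in the log:
// <MINIKUBE_HOME>/cache/preloaded-tarball/preloaded-images-k8s-v18-<ver>-docker-overlay2-arm64.tar.lz4
func tarballPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-arm64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // e.g. .../19282-6331/.minikube in this run
	p := tarballPath(home, "v1.30.2")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload:", p, "- skipping download")
		return
	}
	fmt.Println("no local preload; would fetch",
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4")
}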

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.11s)

TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-580000
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (6.24s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-152000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-152000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (6.235068291s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (6.24s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-152000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-152000: exit status 85 (78.359917ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
	|         | -p download-only-716000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
	| delete  | -p download-only-716000             | download-only-716000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
	| start   | -o=json --download-only             | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
	|         | -p download-only-580000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
	| delete  | -p download-only-580000             | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT | 17 Jul 24 10:53 PDT |
	| start   | -o=json --download-only             | download-only-152000 | jenkins | v1.33.1 | 17 Jul 24 10:53 PDT |                     |
	|         | -p download-only-152000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:53:55
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:53:55.079297    6872 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:53:55.079434    6872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:55.079437    6872 out.go:304] Setting ErrFile to fd 2...
	I0717 10:53:55.079444    6872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:55.079574    6872 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:53:55.080591    6872 out.go:298] Setting JSON to true
	I0717 10:53:55.096666    6872 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5007,"bootTime":1721233828,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 10:53:55.096732    6872 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:53:55.101813    6872 out.go:97] [download-only-152000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:53:55.101903    6872 notify.go:220] Checking for updates...
	I0717 10:53:55.105755    6872 out.go:169] MINIKUBE_LOCATION=19282
	I0717 10:53:55.109732    6872 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 10:53:55.114787    6872 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:53:55.116121    6872 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:53:55.118705    6872 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	W0717 10:53:55.124689    6872 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 10:53:55.124827    6872 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:53:55.127658    6872 out.go:97] Using the qemu2 driver based on user configuration
	I0717 10:53:55.127670    6872 start.go:297] selected driver: qemu2
	I0717 10:53:55.127674    6872 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:53:55.127751    6872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:53:55.130735    6872 out.go:169] Automatically selected the socket_vmnet network
	I0717 10:53:55.135941    6872 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0717 10:53:55.136032    6872 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 10:53:55.136051    6872 cni.go:84] Creating CNI manager for ""
	I0717 10:53:55.136059    6872 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 10:53:55.136065    6872 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 10:53:55.136106    6872 start.go:340] cluster config:
	{Name:download-only-152000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:53:55.139634    6872 iso.go:125] acquiring lock: {Name:mk9f89fc1c9e2bb28471b9516bdd5d0ade49e59d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:53:55.142625    6872 out.go:97] Starting "download-only-152000" primary control-plane node in "download-only-152000" cluster
	I0717 10:53:55.142632    6872 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 10:53:55.202119    6872 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0717 10:53:55.202137    6872 cache.go:56] Caching tarball of preloaded images
	I0717 10:53:55.202309    6872 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 10:53:55.206714    6872 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0717 10:53:55.206721    6872 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:53:55.286462    6872 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0717 10:53:59.212994    6872 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:53:59.213154    6872 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:53:59.731724    6872 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0717 10:53:59.731937    6872 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/download-only-152000/config.json ...
	I0717 10:53:59.731958    6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19282-6331/.minikube/profiles/download-only-152000/config.json: {Name:mka78078f79ed1b919633911f484d673782cf831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:53:59.732198    6872 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 10:53:59.732322    6872 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19282-6331/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-152000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-152000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)
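
For reference, the artifacts this download-only run fetches can be inspected directly in the cache; a minimal sketch, assuming the cache layout under the MINIKUBE_HOME shown in the log above:

	ls "$MINIKUBE_HOME/cache/preloaded-tarball"              # the preloaded-images-k8s-v18-v1.31.0-beta.0 tarball
	ls "$MINIKUBE_HOME/cache/darwin/arm64/v1.31.0-beta.0"    # the kubectl binary fetched from dl.k8s.io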

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-152000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.28s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-527000 --alsologtostderr --binary-mirror http://127.0.0.1:51087 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-527000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-527000
--- PASS: TestBinaryMirror (0.28s)
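
TestBinaryMirror starts a download-only profile with --binary-mirror pointed at a local HTTP server, so the Kubernetes binaries are fetched from the mirror instead of dl.k8s.io. A minimal sketch of the same invocation, assuming a mirror is already listening (the port, 51087 in this run, is whatever the test's helper server happened to bind):

	out/minikube-darwin-arm64 start --download-only -p binary-mirror-527000 --alsologtostderr --binary-mirror http://127.0.0.1:51087 --driver=qemu2
	out/minikube-darwin-arm64 delete -p binary-mirror-527000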

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-562000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-562000: exit status 85 (60.408084ms)

-- stdout --
	* Profile "addons-562000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-562000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-562000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-562000: exit status 85 (56.588708ms)

-- stdout --
	* Profile "addons-562000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-562000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.75s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.75s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 status: exit status 7 (30.648083ms)

-- stdout --
	nospam-358000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 status: exit status 7 (30.192666ms)

-- stdout --
	nospam-358000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 status: exit status 7 (29.20225ms)

-- stdout --
	nospam-358000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 pause: exit status 83 (39.891208ms)

-- stdout --
	* The control-plane node nospam-358000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-358000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 pause: exit status 83 (38.966084ms)

-- stdout --
	* The control-plane node nospam-358000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-358000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 pause: exit status 83 (38.6445ms)

-- stdout --
	* The control-plane node nospam-358000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-358000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 unpause: exit status 83 (38.869ms)

-- stdout --
	* The control-plane node nospam-358000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-358000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 unpause: exit status 83 (39.971625ms)

-- stdout --
	* The control-plane node nospam-358000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-358000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 unpause: exit status 83 (37.054375ms)

-- stdout --
	* The control-plane node nospam-358000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-358000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (10.57s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 stop: (3.741300875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 stop: (3.422256959s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-358000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-358000 stop: (3.402454083s)
--- PASS: TestErrorSpam/stop (10.57s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19282-6331/.minikube/files/etc/test/nested/copy/6820/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.71s)

TestFunctional/serial/CacheCmd/cache/add_local (1.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-208000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1992833151/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 cache add minikube-local-cache-test:functional-208000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 cache delete minikube-local-cache-test:functional-208000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-208000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.02s)
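
The add_local flow above builds a throwaway image on the host, pushes it into minikube's cache, and then removes it from both places. A minimal sketch of the same round trip, assuming the functional-208000 profile and any directory containing a buildable Dockerfile:

	docker build -t minikube-local-cache-test:functional-208000 .
	out/minikube-darwin-arm64 -p functional-208000 cache add minikube-local-cache-test:functional-208000
	out/minikube-darwin-arm64 -p functional-208000 cache delete minikube-local-cache-test:functional-208000
	docker rmi minikube-local-cache-test:functional-208000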

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 config get cpus: exit status 14 (28.967709ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 config get cpus: exit status 14 (34.085292ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
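
The exit status 14 runs are the point of this test: "config get" fails whenever the key is absent, both before it is set and after it is unset. A minimal sketch of the same round trip, assuming the functional-208000 profile exists:

	out/minikube-darwin-arm64 -p functional-208000 config unset cpus
	out/minikube-darwin-arm64 -p functional-208000 config get cpus    # exit 14: key not in config
	out/minikube-darwin-arm64 -p functional-208000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-208000 config get cpus    # prints 2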

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-208000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-208000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (159.148541ms)

-- stdout --
	* [functional-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0717 10:55:38.751265    7460 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:38.751447    7460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:38.751452    7460 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:38.751455    7460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:38.751651    7460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:55:38.752933    7460 out.go:298] Setting JSON to false
	I0717 10:55:38.772645    7460 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5110,"bootTime":1721233828,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 10:55:38.772715    7460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:55:38.776955    7460 out.go:177] * [functional-208000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:55:38.783857    7460 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 10:55:38.783920    7460 notify.go:220] Checking for updates...
	I0717 10:55:38.789804    7460 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 10:55:38.792815    7460 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:55:38.795820    7460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:55:38.798895    7460 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 10:55:38.801859    7460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:55:38.805154    7460 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:38.805458    7460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:55:38.809798    7460 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 10:55:38.816794    7460 start.go:297] selected driver: qemu2
	I0717 10:55:38.816801    7460 start.go:901] validating driver "qemu2" against &{Name:functional-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:55:38.816873    7460 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:55:38.823838    7460 out.go:177] 
	W0717 10:55:38.827807    7460 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 10:55:38.831852    7460 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-208000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
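
The first dry run fails fast because 250MB is below the 1800MB usable minimum that minikube enforces; the second invocation omits --memory and validates cleanly against the existing profile. A minimal sketch of a dry run that passes the memory check, assuming the same profile (any value of at least 1800MB should validate the same way):

	out/minikube-darwin-arm64 start -p functional-208000 --dry-run --memory 2048MB --alsologtostderr --driver=qemu2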

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-208000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-208000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.98075ms)

-- stdout --
	* [functional-208000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0717 10:55:38.977300    7471 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:38.977406    7471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:38.977409    7471 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:38.977412    7471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:38.977531    7471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19282-6331/.minikube/bin
	I0717 10:55:38.978892    7471 out.go:298] Setting JSON to false
	I0717 10:55:38.995624    7471 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5110,"bootTime":1721233828,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0717 10:55:38.995701    7471 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:55:39.000907    7471 out.go:177] * [functional-208000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0717 10:55:39.007870    7471 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 10:55:39.007907    7471 notify.go:220] Checking for updates...
	I0717 10:55:39.014845    7471 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	I0717 10:55:39.017839    7471 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:55:39.020838    7471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:55:39.023840    7471 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	I0717 10:55:39.026817    7471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:55:39.030110    7471 config.go:182] Loaded profile config "functional-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:39.030374    7471 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:55:39.034847    7471 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0717 10:55:39.041802    7471 start.go:297] selected driver: qemu2
	I0717 10:55:39.041808    7471 start.go:901] validating driver "qemu2" against &{Name:functional-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:55:39.041852    7471 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:55:39.048848    7471 out.go:177] 
	W0717 10:55:39.052913    7471 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 10:55:39.056828    7471 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.82s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.797699375s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-208000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.82s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-208000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image rm docker.io/kicbase/echo-server:functional-208000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-208000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 image save --daemon docker.io/kicbase/echo-server:functional-208000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-208000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "46.034542ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.836833ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "46.016083ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.6935ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.010608666s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-208000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-208000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-208000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-208000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.53s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-288000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-288000 --output=json --user=testUser: (3.525503167s)
--- PASS: TestJSONOutput/stop/Command (3.53s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-480000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-480000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.999166ms)

-- stdout --
	{"specversion":"1.0","id":"07359353-b307-4a79-b095-445ff694f8ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-480000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c792e0a1-b0df-492d-aeca-3de760897741","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19282"}}
	{"specversion":"1.0","id":"4b4250e8-f460-4ce1-a3af-67eeea4386e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig"}}
	{"specversion":"1.0","id":"5ef43190-d47e-4f71-ba0d-4ad0e0f13957","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"eee83e6b-7865-4797-b62c-c540b7870aef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a620f8e5-ea4f-48bb-9e1a-3ee4fff24036","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube"}}
	{"specversion":"1.0","id":"897f68ea-eb42-433c-949d-d9e51245895d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"26f66ca9-3561-4d70-ac20-4872d775d58d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-480000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-480000
--- PASS: TestErrorJSONOutput (0.20s)
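
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line on stdout, as in the run above; the "type" field distinguishes step events (io.k8s.sigs.minikube.step) from error events (io.k8s.sigs.minikube.error), and the error event carries the exit code and message (DRV_UNSUPPORTED_OS here). As a minimal sketch, assuming jq is available, the error messages from such a run could be pulled out like this (the profile name is illustrative, not from this run):

	out/minikube-darwin-arm64 start -p json-demo --output=json --driver=fail 2>/dev/null | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'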
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.94s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.94s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-058000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-813000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-813000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (95.292666ms)

-- stdout --
	* [NoKubernetes-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19282
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19282-6331/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19282-6331/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
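
Note: exit status 14 with "Exiting due to MK_USAGE" is the expected outcome here, since --kubernetes-version and --no-kubernetes are mutually exclusive. As a sketch, either of the following would avoid the usage error (same profile and driver as the run above; the second command clears any globally configured version, as the error text itself suggests):

	out/minikube-darwin-arm64 start -p NoKubernetes-813000 --no-kubernetes --driver=qemu2
	out/minikube-darwin-arm64 config unset kubernetes-version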
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-813000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-813000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.516875ms)

-- stdout --
	* The control-plane node NoKubernetes-813000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-813000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

Test skip (23/212)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (13.6s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-208000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1921494802/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721238903836045000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1921494802/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721238903836045000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1921494802/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721238903836045000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1921494802/001/test-1721238903836045000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (54.502333ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.07275ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.631416ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.310334ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.530667ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.533125ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.017708ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.808ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "sudo umount -f /mount-9p": exit status 83 (48.67325ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-208000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-208000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1921494802/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.60s)

TestFunctional/parallel/MountCmd/specific-port (9.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-208000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port526495079/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.859584ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.019167ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.323583ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.551791ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (80.929875ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.899292ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.308167ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "sudo umount -f /mount-9p": exit status 83 (44.607708ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-208000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-208000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port526495079/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (9.97s)

TestFunctional/parallel/MountCmd/VerifyCleanup (11.27s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-208000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3494190541/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-208000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3494190541/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-208000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3494190541/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1: exit status 83 (74.2965ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1: exit status 83 (84.903708ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1: exit status 83 (85.773875ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1: exit status 83 (87.720625ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1: exit status 83 (85.084292ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1: exit status 83 (84.383791ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-208000 ssh "findmnt -T" /mount1: exit status 83 (88.724333ms)

-- stdout --
	* The control-plane node functional-208000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-208000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-208000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3494190541/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-208000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3494190541/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-208000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3494190541/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.27s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-306000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-306000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-306000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-306000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-306000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-306000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-306000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-306000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-306000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-306000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-306000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-306000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-306000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-306000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-306000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-306000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-306000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-306000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-306000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-306000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-306000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-306000
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-306000" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-306000" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-306000
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-306000
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-306000" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-306000" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-306000" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-306000" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-306000" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: kubelet daemon config:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> k8s: kubelet logs:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-306000
>>> host: docker daemon status:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: docker daemon config:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: docker system info:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: cri-docker daemon status:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: cri-docker daemon config:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: cri-dockerd version:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: containerd daemon status:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: containerd daemon config:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: containerd config dump:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: crio daemon status:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: crio daemon config:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: /etc/crio:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
>>> host: crio config:
* Profile "cilium-306000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-306000"
----------------------- debugLogs end: cilium-306000 [took: 2.175439208s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-306000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-306000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)